
Commit 7ea6c59

docs(connectors): clarifies scope of autorestart when enabled (strimzi#9123)
Signed-off-by: prmellor <[email protected]>
1 parent 6809031 commit 7ea6c59

File tree

5 files changed (+39 additions, -59 deletions)


documentation/modules/configuring/con-config-mirrormaker2.adoc

Lines changed: 1 addition & 1 deletion
@@ -220,7 +220,7 @@ Standard Apache Kafka configuration may be provided, restricted to those propert
 <16> Cluster alias for the target cluster used by the MirrorMaker 2 connectors.
 <17> Configuration for the `MirrorSourceConnector` that creates remote topics. The `config` overrides the default configuration options.
 <18> The maximum number of tasks that the connector may create. Tasks handle the data replication and run in parallel. If the infrastructure supports the processing overhead, increasing this value can improve throughput. Kafka Connect distributes the tasks between members of the cluster. If there are more tasks than workers, workers are assigned multiple tasks. For sink connectors, aim to have one task for each topic partition consumed. For source connectors, the number of tasks that can run in parallel may also depend on the external system. The connector creates fewer than the maximum number of tasks if it cannot achieve the parallelism.
-<19> Enables automatic restarts of failed connectors and tasks.
+<19> Enables automatic restarts of failed connectors and tasks. By default, the number of restarts is indefinite, but you can set a maximum on the number of automatic restarts using the `maxRestarts` property.
 <20> Replication factor for mirrored topics created at the target cluster.
 <21> Replication factor for the `MirrorSourceConnector` `offset-syncs` internal topic that maps the offsets of the source and target clusters.
 <22> When ACL rules synchronization is enabled, ACLs are applied to synchronized topics. The default is `true`. This feature is not compatible with the User Operator. If you are using the User Operator, set this property to `false`.
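As a minimal sketch of the `maxRestarts` cap that the new callout text describes, the fragment below extends the `autoRestart` block; the value `10` is an arbitrary illustration, not a recommended default:

```yaml
# Sketch: enable automatic restarts of failed connectors and tasks,
# capped at 10 attempts (the cap value here is illustrative)
autoRestart:
  enabled: true
  maxRestarts: 10
```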

documentation/modules/configuring/proc-manual-restart-mirrormaker2-connector-task.adoc

Lines changed: 1 addition & 1 deletion
@@ -2,7 +2,7 @@
 // assembly-management-tasks.adoc
 
 [id='proc-manual-restart-mirrormaker2-connector-task-{context}']
-= Performing restarts of MirrorMaker 2 connector task using annotations
+= Performing restarts of MirrorMaker 2 connector tasks using annotations
 
 This procedure describes how to manually trigger a restart of a Kafka MirrorMaker 2 connector task by using a Kubernetes annotation.
 
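A hedged sketch of the annotation this procedure applies: the annotation key `strimzi.io/restart-connector-task` and its `<connector>:<task-id>` value format follow the Strimzi documentation, while the resource and connector names below are hypothetical placeholders:

```shell
# Hypothetical resource and connector names for illustration.
MM2_RESOURCE="my-mirror-maker-2"
CONNECTOR="my-cluster-source->my-cluster-target.MirrorSourceConnector"
TASK_ID=0

# The annotation value pairs the connector name with the task ID.
ANNOTATION="strimzi.io/restart-connector-task=${CONNECTOR}:${TASK_ID}"
echo "${ANNOTATION}"

# Against a live cluster you would apply it with (not run here):
# kubectl annotate kafkamirrormaker2 "${MM2_RESOURCE}" "${ANNOTATION}"
```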
documentation/modules/deploying/proc-deploying-kafkaconnector.adoc

Lines changed: 8 additions & 33 deletions
@@ -15,8 +15,7 @@ You remove a connector by deleting its corresponding `KafkaConnector`.
 
 `KafkaConnector` resources must be deployed to the same namespace as the Kafka Connect cluster they link to.
 
-In the configuration shown in this procedure, the `autoRestart` features is enabled (`enabled: true`).
-This enables automatic restarts of failed connectors and tasks.
+In the configuration shown in this procedure, the `autoRestart` feature is enabled (`enabled: true`) for automatic restarts of failed connectors and tasks.
 You can also annotate the `KafkaConnector` resource to xref:proc-manual-restart-connector-str[restart a connector] or xref:proc-manual-restart-connector-task-str[restart a connector task] manually.
 
 .Example connectors
@@ -65,33 +64,9 @@ With the `KafkaConnector` resources enabled, the Cluster Operator watches for th
 
 . Edit the `examples/connect/source-connector.yaml` file:
 +
-[source,yaml,subs="attributes+"]
-----
-apiVersion: {KafkaConnectorApiVersion}
-kind: KafkaConnector
-metadata:
-  name: my-source-connector # <1>
-  labels:
-    strimzi.io/cluster: my-connect-cluster # <2>
-spec:
-  class: org.apache.kafka.connect.file.FileStreamSourceConnector # <3>
-  tasksMax: 2 # <4>
-  autoRestart: # <5>
-    enabled: true
-  config: # <6>
-    file: "/opt/kafka/LICENSE" # <7>
-    topic: my-topic <8>
-# ...
-----
-+
-<1> Name of the `KafkaConnector` resource, which is used as the name of the connector. Use any name that is valid for a Kubernetes resource.
-<2> Name of the Kafka Connect cluster to create the connector instance in. Connectors must be deployed to the same namespace as the Kafka Connect cluster they link to.
-<3> Full name or alias of the connector class. This should be present in the image being used by the Kafka Connect cluster.
-<4> Maximum number of Kafka Connect tasks that the connector can create.
-<5> Enables automatic restarts of failed connectors and tasks.
-<6> xref:kafkaconnector-configs[Connector configuration] as key-value pairs.
-<7> This example source connector configuration reads data from the `/opt/kafka/LICENSE` file.
-<8> Kafka topic to publish the source data to.
+--
+include::../../shared/snip-example-source-connector-config.adoc[]
+--
 
 . Create the source `KafkaConnector` in your Kubernetes cluster:
 +
@@ -118,11 +93,11 @@ metadata:
   labels:
     strimzi.io/cluster: my-connect
 spec:
-  class: org.apache.kafka.connect.file.FileStreamSinkConnector <1>
+  class: org.apache.kafka.connect.file.FileStreamSinkConnector # <1>
   tasksMax: 2
-  config: <2>
-    file: "/tmp/my-file" <3>
-    topics: my-topic <4>
+  config: # <2>
+    file: "/tmp/my-file" # <3>
+    topics: my-topic # <4>
 ----
 +
 <1> Full name or alias of the connector class. This should be present in the image being used by the Kafka Connect cluster.

documentation/modules/overview/con-configuration-points-connect.adoc

Lines changed: 1 addition & 24 deletions
@@ -183,30 +183,7 @@ You can also specify where the data should sit in Kafka by specifying a target t
 Use `tasksMax` to specify the maximum number of tasks.
 For example, a source connector with `tasksMax: 2` might split the import of source data into two tasks.
 
-.Example KafkaConnector source connector configuration
-[source,yaml,subs="attributes+"]
-----
-apiVersion: {KafkaConnectApiVersion}
-kind: KafkaConnector
-metadata:
-  name: my-source-connector <1>
-  labels:
-    strimzi.io/cluster: my-connect-cluster <2>
-spec:
-  class: org.apache.kafka.connect.file.FileStreamSourceConnector <3>
-  tasksMax: 2 <4>
-  config: <5>
-    file: "/opt/kafka/LICENSE" <6>
-    topic: my-topic <7>
-# ...
-----
-<1> Name of the `KafkaConnector` resource, which is used as the name of the connector. Use any name that is valid for a Kubernetes resource.
-<2> Name of the Kafka Connect cluster to create the connector instance in. Connectors must be deployed to the same namespace as the Kafka Connect cluster they link to.
-<3> Full name of the connector class. This should be present in the image being used by the Kafka Connect cluster.
-<4> Maximum number of Kafka Connect tasks that the connector can create.
-<5> link:{BookURLDeploying}#kafkaconnector-configs[Connector configuration^] as key-value pairs.
-<6> Location of the external data file. In this example, we're configuring the `FileStreamSourceConnector` to read from the `/opt/kafka/LICENSE` file.
-<7> Kafka topic to publish the source data to.
+include::../../shared/snip-example-source-connector-config.adoc[]
 
 NOTE: You can link:{BookURLDeploying}#assembly-loading-config-with-providers-str[load confidential configuration values for a connector^] from external sources, such as Kubernetes Secrets or ConfigMaps.

documentation/shared/snip-example-source-connector-config.adoc

Lines changed: 28 additions & 0 deletions
@@ -0,0 +1,28 @@
+//source connector example
+.Example KafkaConnector source connector configuration
+[source,yaml,subs="attributes+"]
+----
+apiVersion: {KafkaConnectApiVersion}
+kind: KafkaConnector
+metadata:
+  name: my-source-connector # <1>
+  labels:
+    strimzi.io/cluster: my-connect-cluster # <2>
+spec:
+  class: org.apache.kafka.connect.file.FileStreamSourceConnector # <3>
+  tasksMax: 2 # <4>
+  autoRestart: # <5>
+    enabled: true
+  config: # <6>
+    file: "/opt/kafka/LICENSE" # <7>
+    topic: my-topic # <8>
+# ...
+----
+<1> Name of the `KafkaConnector` resource, which is used as the name of the connector. Use any name that is valid for a Kubernetes resource.
+<2> Name of the Kafka Connect cluster to create the connector instance in. Connectors must be deployed to the same namespace as the Kafka Connect cluster they link to.
+<3> Full name of the connector class. This should be present in the image being used by the Kafka Connect cluster.
+<4> Maximum number of Kafka Connect tasks that the connector can create.
+<5> Enables automatic restarts of failed connectors and tasks. By default, the number of restarts is indefinite, but you can set a maximum on the number of automatic restarts using the `maxRestarts` property.
+<6> link:{BookURLDeploying}#kafkaconnector-configs[Connector configuration^] as key-value pairs.
+<7> Location of the external data file. In this example, we're configuring the `FileStreamSourceConnector` to read from the `/opt/kafka/LICENSE` file.
+<8> Kafka topic to publish the source data to.
