I have Confluent Kafka (v4.0.0) brokers and 3 ZooKeeper nodes running in docker-compose. A test topic was created with 10 partitions and replication factor 3. When I start a console consumer without the --group option (so a group.id is assigned automatically), it keeps consuming messages continuously, even after a broker is killed and comes back online.
However, when I start a console consumer with the --group option ('console-group'), message consumption stops after a Kafka broker is killed.
$ docker run --net=host confluentinc/cp-kafka:4.0.0 kafka-console-consumer --bootstrap-server localhost:19092,localhost:29092,localhost:39092 --topic starcom.status --from-beginning --group console-group
<< some messages consumed >>
<< broker got killed >>
[2017-12-31 18:34:05,344] WARN [Consumer clientId=consumer-1, groupId=console-group] Connection to node -1 could not be established. Broker may not be available. (org.apache.kafka.clients.NetworkClient)
<< no message after this >>
Even after the broker comes back online, the consumer group does not consume any further messages.
The strange thing is that there is no lag for this consumer group when I check with the kafka-consumer-groups tool. In other words, the consumer offsets keep advancing for this group. There is no other client running with that group.id, so something is wrong.
Based on the logs, the group appears to have stabilized.
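To make "no lag" precise: lag is the log end offset minus the last committed offset per partition, so zero lag means the committed offsets have caught up with the end of the log. A minimal sketch of that arithmetic, with hypothetical offset values (not taken from the cluster above):

```python
def consumer_lag(end_offsets, committed_offsets):
    """Per-partition lag: log end offset minus last committed offset.

    Partitions with no committed offset are treated as starting from 0.
    """
    return {
        partition: end - committed_offsets.get(partition, 0)
        for partition, end in end_offsets.items()
    }

# Hypothetical numbers: when committed offsets equal the end offsets,
# the group shows zero lag even if no consumer is actually receiving data.
end = {"starcom.status-0": 120, "starcom.status-1": 95}
committed = {"starcom.status-0": 120, "starcom.status-1": 95}
print(consumer_lag(end, committed))  # {'starcom.status-0': 0, 'starcom.status-1': 0}
```

This is what makes the symptom confusing: zero lag normally implies the group is consuming, yet here the console consumer prints nothing.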
kafka-2_1 | [2017-12-31 17:35:40,743] INFO [GroupCoordinator 2]: Preparing to rebalance group console-group with old generation 0 (__consumer_offsets-33) (kafka.coordinator.group.GroupCoordinator)
kafka-2_1 | [2017-12-31 17:35:43,746] INFO [GroupCoordinator 2]: Stabilized group console-group generation 1 (__consumer_offsets-33) (kafka.coordinator.group.GroupCoordinator)
kafka-2_1 | [2017-12-31 17:35:43,765] INFO [GroupCoordinator 2]: Assignment received from leader for group console-group for generation 1 (kafka.coordinator.group.GroupCoordinator)
kafka-2_1 | [2017-12-31 17:54:30,228] INFO [GroupCoordinator 2]: Preparing to rebalance group console-group with old generation 1 (__consumer_offsets-33) (kafka.coordinator.group.GroupCoordinator)
kafka-2_1 | [2017-12-31 17:54:31,162] INFO [GroupCoordinator 2]: Stabilized group console-group generation 2 (__consumer_offsets-33) (kafka.coordinator.group.GroupCoordinator)
kafka-2_1 | [2017-12-31 17:54:31,173] INFO [GroupCoordinator 2]: Assignment received from leader for group console-group for generation 2 (kafka.coordinator.group.GroupCoordinator)
kafka-2_1 | [2017-12-31 17:57:25,273] INFO [GroupCoordinator 2]: Preparing to rebalance group console-group with old generation 2 (__consumer_offsets-33) (kafka.coordinator.group.GroupCoordinator)
kafka-2_1 | [2017-12-31 17:57:28,256] INFO [GroupCoordinator 2]: Stabilized group console-group generation 3 (__consumer_offsets-33) (kafka.coordinator.group.GroupCoordinator)
kafka-2_1 | [2017-12-31 17:57:28,267] INFO [GroupCoordinator 2]: Assignment received from leader for group console-group for generation 3 (kafka.coordinator.group.GroupCoordinator)
kafka-2_1 | [2017-12-31 17:57:53,594] INFO [GroupCoordinator 2]: Preparing to rebalance group console-group with old generation 3 (__consumer_offsets-33) (kafka.coordinator.group.GroupCoordinator)
kafka-2_1 | [2017-12-31 17:57:55,322] INFO [GroupCoordinator 2]: Stabilized group console-group generation 4 (__consumer_offsets-33) (kafka.coordinator.group.GroupCoordinator)
kafka-2_1 | [2017-12-31 17:57:55,336] INFO [GroupCoordinator 2]: Assignment received from leader for group console-group for generation 4 (kafka.coordinator.group.GroupCoordinator)
kafka-3_1 | [2017-12-31 18:15:07,953] INFO [GroupCoordinator 3]: Preparing to rebalance group console-group-2 with old generation 0 (__consumer_offsets-22) (kafka.coordinator.group.GroupCoordinator)
kafka-3_1 | [2017-12-31 18:15:10,987] INFO [GroupCoordinator 3]: Stabilized group console-group-2 generation 1 (__consumer_offsets-22) (kafka.coordinator.group.GroupCoordinator)
kafka-3_1 | [2017-12-31 18:15:11,044] INFO [GroupCoordinator 3]: Assignment received from leader for group console-group-2 for generation 1 (kafka.coordinator.group.GroupCoordinator)
kafka-2_1 | [2017-12-31 18:08:59,087] INFO [GroupCoordinator 2]: Loading group metadata for console-group with generation 4 (kafka.coordinator.group.GroupCoordinator)
kafka-2_1 | [2017-12-31 18:09:02,453] INFO [GroupCoordinator 2]: Preparing to rebalance group console-group with old generation 4 (__consumer_offsets-33) (kafka.coordinator.group.GroupCoordinator)
kafka-2_1 | [2017-12-31 18:09:03,309] INFO [GroupCoordinator 2]: Stabilized group console-group generation 5 (__consumer_offsets-33) (kafka.coordinator.group.GroupCoordinator)
kafka-2_1 | [2017-12-31 18:09:03,471] INFO [GroupCoordinator 2]: Assignment received from leader for group console-group for generation 5 (kafka.coordinator.group.GroupCoordinator)
kafka-2_1 | [2017-12-31 18:10:32,010] INFO [GroupCoordinator 2]: Preparing to rebalance group console-group with old generation 5 (__consumer_offsets-33) (kafka.coordinator.group.GroupCoordinator)
kafka-2_1 | [2017-12-31 18:10:34,006] INFO [GroupCoordinator 2]: Stabilized group console-group generation 6 (__consumer_offsets-33) (kafka.coordinator.group.GroupCoordinator)
kafka-2_1 | [2017-12-31 18:10:34,040] INFO [GroupCoordinator 2]: Assignment received from leader for group console-group for generation 6 (kafka.coordinator.group.GroupCoordinator)
kafka-2_1 | [2017-12-31 18:12:02,014] INFO [GroupCoordinator 2]: Preparing to rebalance group console-group with old generation 6 (__consumer_offsets-33) (kafka.coordinator.group.GroupCoordinator)
kafka-2_1 | [2017-12-31 18:12:09,449] INFO [GroupCoordinator 2]: Stabilized group console-group generation 7 (__consumer_offsets-33) (kafka.coordinator.group.GroupCoordinator)
kafka-2_1 | [2017-12-31 18:12:09,466] INFO [GroupCoordinator 2]: Assignment received from leader for group console-group for generation 7 (kafka.coordinator.group.GroupCoordinator)
kafka-2_1 | [2017-12-31 18:16:29,277] INFO [GroupCoordinator 2]: Preparing to rebalance group console-group with old generation 7 (__consumer_offsets-33) (kafka.coordinator.group.GroupCoordinator)
kafka-2_1 | [2017-12-31 18:16:31,924] INFO [GroupCoordinator 2]: Stabilized group console-group generation 8 (__consumer_offsets-33) (kafka.coordinator.group.GroupCoordinator)
kafka-2_1 | [2017-12-31 18:16:31,945] INFO [GroupCoordinator 2]: Assignment received from leader for group console-group for generation 8 (kafka.coordinator.group.GroupCoordinator)
kafka-2_1 | [2017-12-31 18:17:54,813] INFO [GroupCoordinator 2]: Preparing to rebalance group console-group with old generation 8 (__consumer_offsets-33) (kafka.coordinator.group.GroupCoordinator)
kafka-2_1 | [2017-12-31 18:18:01,256] INFO [GroupCoordinator 2]: Stabilized group console-group generation 9 (__consumer_offsets-33) (kafka.coordinator.group.GroupCoordinator)
kafka-2_1 | [2017-12-31 18:18:01,278] INFO [GroupCoordinator 2]: Assignment received from leader for group console-group for generation 9 (kafka.coordinator.group.GroupCoordinator)
kafka-2_1 | [2017-12-31 18:33:47,316] INFO [GroupCoordinator 2]: Preparing to rebalance group console-group with old generation 9 (__consumer_offsets-33) (kafka.coordinator.group.GroupCoordinator)
kafka-2_1 | [2017-12-31 18:33:49,709] INFO [GroupCoordinator 2]: Stabilized group console-group generation 10 (__consumer_offsets-33) (kafka.coordinator.group.GroupCoordinator)
kafka-2_1 | [2017-12-31 18:33:49,745] INFO [GroupCoordinator 2]: Assignment received from leader for group console-group for generation 10 (kafka.coordinator.group.GroupCoordinator)
kafka-2_1 | [2017-12-31 18:34:05,484] INFO [GroupCoordinator 2]: Preparing to rebalance group console-group with old generation 10 (__consumer_offsets-33) (kafka.coordinator.group.GroupCoordinator)
kafka-2_1 | [2017-12-31 18:34:07,845] INFO [GroupCoordinator 2]: Stabilized group console-group generation 11 (__consumer_offsets-33) (kafka.coordinator.group.GroupCoordinator)
kafka-2_1 | [2017-12-31 18:34:07,865] INFO [GroupCoordinator 2]: Assignment received from leader for group console-group for generation 11 (kafka.coordinator.group.GroupCoordinator)
kafka-2_1 | [2017-12-31 19:34:16,436] INFO [GroupCoordinator 2]: Preparing to rebalance group console-group with old generation 11 (__consumer_offsets-33) (kafka.coordinator.group.GroupCoordinator)
kafka-2_1 | [2017-12-31 19:34:18,221] INFO [GroupCoordinator 2]: Stabilized group console-group generation 12 (__consumer_offsets-33) (kafka.coordinator.group.GroupCoordinator)
kafka-2_1 | [2017-12-31 19:34:18,248] INFO [GroupCoordinator 2]: Assignment received from leader for group console-group for generation 12 (kafka.coordinator.group.GroupCoordinator)
And topic replication proceeded normally throughout. Here is the --describe output before, while, and after the broker was down (note the ISR shrinking to two replicas and then recovering):
$ docker run --net=host confluentinc/cp-kafka:4.0.0 kafka-topics --zookeeper localhost:22181 --topic starcom.status --describe
Topic:starcom.status PartitionCount:10 ReplicationFactor:3 Configs:
Topic: starcom.status Partition: 0 Leader: 3 Replicas: 3,1,2 Isr: 2,3,1
Topic: starcom.status Partition: 1 Leader: 1 Replicas: 1,2,3 Isr: 3,2,1
Topic: starcom.status Partition: 2 Leader: 2 Replicas: 2,3,1 Isr: 3,2,1
Topic: starcom.status Partition: 3 Leader: 3 Replicas: 3,2,1 Isr: 3,2,1
Topic: starcom.status Partition: 4 Leader: 1 Replicas: 1,3,2 Isr: 3,2,1
Topic: starcom.status Partition: 5 Leader: 2 Replicas: 2,1,3 Isr: 3,2,1
Topic: starcom.status Partition: 6 Leader: 3 Replicas: 3,1,2 Isr: 2,3,1
Topic: starcom.status Partition: 7 Leader: 1 Replicas: 1,2,3 Isr: 3,2,1
Topic: starcom.status Partition: 8 Leader: 2 Replicas: 2,3,1 Isr: 3,2,1
Topic: starcom.status Partition: 9 Leader: 3 Replicas: 3,2,1 Isr: 3,2,1
$ docker run --net=host confluentinc/cp-kafka:4.0.0 kafka-topics --zookeeper localhost:22181 --topic starcom.status --describe
Topic:starcom.status PartitionCount:10 ReplicationFactor:3 Configs:
Topic: starcom.status Partition: 0 Leader: 3 Replicas: 3,1,2 Isr: 2,3
Topic: starcom.status Partition: 1 Leader: 2 Replicas: 1,2,3 Isr: 3,2
Topic: starcom.status Partition: 2 Leader: 2 Replicas: 2,3,1 Isr: 3,2
Topic: starcom.status Partition: 3 Leader: 3 Replicas: 3,2,1 Isr: 3,2
Topic: starcom.status Partition: 4 Leader: 3 Replicas: 1,3,2 Isr: 3,2
Topic: starcom.status Partition: 5 Leader: 2 Replicas: 2,1,3 Isr: 3,2
Topic: starcom.status Partition: 6 Leader: 3 Replicas: 3,1,2 Isr: 2,3
Topic: starcom.status Partition: 7 Leader: 2 Replicas: 1,2,3 Isr: 3,2
Topic: starcom.status Partition: 8 Leader: 2 Replicas: 2,3,1 Isr: 3,2
Topic: starcom.status Partition: 9 Leader: 3 Replicas: 3,2,1 Isr: 3,2
$ docker run --net=host confluentinc/cp-kafka:4.0.0 kafka-topics --zookeeper localhost:22181 --topic starcom.status --describe
Topic:starcom.status PartitionCount:10 ReplicationFactor:3 Configs:
Topic: starcom.status Partition: 0 Leader: 3 Replicas: 3,1,2 Isr: 2,3,1
Topic: starcom.status Partition: 1 Leader: 1 Replicas: 1,2,3 Isr: 3,2,1
Topic: starcom.status Partition: 2 Leader: 2 Replicas: 2,3,1 Isr: 3,2,1
Topic: starcom.status Partition: 3 Leader: 3 Replicas: 3,2,1 Isr: 3,2,1
Topic: starcom.status Partition: 4 Leader: 1 Replicas: 1,3,2 Isr: 3,2,1
Topic: starcom.status Partition: 5 Leader: 2 Replicas: 2,1,3 Isr: 3,2,1
Topic: starcom.status Partition: 6 Leader: 3 Replicas: 3,1,2 Isr: 2,3,1
Topic: starcom.status Partition: 7 Leader: 1 Replicas: 1,2,3 Isr: 3,2,1
Topic: starcom.status Partition: 8 Leader: 2 Replicas: 2,3,1 Isr: 3,2,1
Topic: starcom.status Partition: 9 Leader: 3 Replicas: 3,2,1 Isr: 3,2,1
Is this a limitation of the (Confluent) Kafka console consumer? Basically, I am running this smaller test to make sure my real Java Kafka consumers can survive broker downtime.
Any help is appreciated.
EDIT (year 2018!):
I rebuilt my Docker (-compose) environment from scratch and was able to reproduce this. This time I created the consumer group "new-group", and the console consumer hit the error below after the broker restart. No messages have been consumed since. According to the consumer-groups tool, the consumer offsets keep advancing.
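For what it's worth, a Java consumer's ability to ride out a broker restart mostly comes down to listing every broker in bootstrap.servers (as the console command above already does) and leaving the retry/heartbeat settings at reasonable values. A sketch of the standard Kafka consumer properties involved, written here as a plain config map; the values are illustrative assumptions, not taken from this post:

```python
# Standard Kafka consumer configuration keys relevant to surviving a broker
# restart. Values below are illustrative defaults/assumptions, not from the
# original setup.
consumer_config = {
    # All brokers, so the client can bootstrap even if one node is down
    "bootstrap.servers": "localhost:19092,localhost:29092,localhost:39092",
    "group.id": "console-group",
    # How quickly the group coordinator declares this consumer dead
    "session.timeout.ms": "10000",
    "heartbeat.interval.ms": "3000",
    # Backoff for reconnecting to a broker that dropped the connection
    "reconnect.backoff.ms": "50",
    "reconnect.backoff.max.ms": "1000",
    # How often cluster metadata (partition leaders) is refreshed proactively
    "metadata.max.age.ms": "30000",
    "enable.auto.commit": "true",
}
```

If the console consumer (which uses these same client internals) cannot recover while an ungrouped one can, the difference is likely on the coordinator/offset-commit path rather than in these client-side settings.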
[2018-01-01 19:18:32,935] ERROR [Consumer clientId=consumer-1, groupId=new-group] Offset commit failed on partition starcom.status-4 at offset 0: This is not the correct coordinator. (org.apache.kafka.clients.consumer.internals.ConsumerCoordinator)
[2018-01-01 19:18:32,936] WARN [Consumer clientId=consumer-1, groupId=new-group] Asynchronous auto-commit of offsets {starcom.status-4=OffsetAndMetadata{offset=0, metadata=''}, starcom.status-5=OffsetAndMetadata{offset=0, metadata=''}, starcom.status-6=OffsetAndMetadata{offset=2, metadata=''}} failed: Offset commit failed with a retriable exception. You should retry committing offsets. The underlying error was: This is not the correct coordinator. (org.apache.kafka.clients.consumer.internals.ConsumerCoordinator)
The console consumer is just a wrapper around the Java API, but it is unable to re-establish the connection to the new partition leader? If so, it looks like a network/configuration problem. Also note that if you simply delete a Docker container, all data for that broker is lost. –
Thanks for the comment. However, as I said above, when I leave out the --group option, the console consumer can survive an ungraceful broker failure --- which is a very good thing. This leads me to believe it is not a network/configuration problem. – nanaboo
Provided more error messages above. – nanaboo