2017-10-02

I have a Docker container (a Hadoop installation, https://github.com/kiwenlau/hadoop-cluster-docker) that I can run with docker run without any problem, but when I try to deploy the same image on Kubernetes, the pod fails to start. To create the deployment I use the command kubectl run hadoop-master --image=kiwenlau/hadoop:1.0 --port=8088 --port=50070. The Docker container runs fine on its own, but fails on Kubernetes.

Here is the log.

Output of kubectl describe pod:
Events: 
    FirstSeen  LastSeen  Count From             SubObjectPath     Type   Reason   Message 
    ---------  --------  ----- ----             -------------     --------  ------   ------- 
    6m   6m    1  default-scheduler                  Normal   Scheduled  Successfully assigned hadoop-master-2828539450-rnwsd to gke-mtd-cluster-default-pool-6b97d4d0-hcbt 
    6m   6m    1  kubelet, gke-mtd-cluster-default-pool-6b97d4d0-hcbt  spec.containers{hadoop-master} Normal   Created   Created container with id 1560ff87e0e7357c76cec89f5f429e0b9b5fc51523c79e4e2c12df1834d7dd75 
    6m   6m    1  kubelet, gke-mtd-cluster-default-pool-6b97d4d0-hcbt  spec.containers{hadoop-master} Normal   Started   Started container with id 1560ff87e0e7357c76cec89f5f429e0b9b5fc51523c79e4e2c12df1834d7dd75 
    6m   6m    1  kubelet, gke-mtd-cluster-default-pool-6b97d4d0-hcbt  spec.containers{hadoop-master} Normal   Created   Created container with id c939d3336687a33e69d37aa73177e673fd56d766cb499a4235e89d554d233c37 
    6m   6m    1  kubelet, gke-mtd-cluster-default-pool-6b97d4d0-hcbt  spec.containers{hadoop-master} Normal   Started   Started container with id c939d3336687a33e69d37aa73177e673fd56d766cb499a4235e89d554d233c37 
    6m   6m    2  kubelet, gke-mtd-cluster-default-pool-6b97d4d0-hcbt          Warning   FailedSync  Error syncing pod, skipping: failed to "StartContainer" for "hadoop-master" with CrashLoopBackOff: "Back-off 10s restarting failed container=hadoop-master pod=hadoop-master-2828539450-rnwsd_default(562dae39-a757-11e7-a5a3-42010a8401c6)" 

    6m 6m  1  kubelet, gke-mtd-cluster-default-pool-6b97d4d0-hcbt  spec.containers{hadoop-master} Normal Created   Created container with id 7d1c67686c039e459ee0ea3936eedb4996a5201f6a1fec02ac98d219bb07745f 
    6m 6m  1  kubelet, gke-mtd-cluster-default-pool-6b97d4d0-hcbt  spec.containers{hadoop-master} Normal Started   Started container with id 7d1c67686c039e459ee0ea3936eedb4996a5201f6a1fec02ac98d219bb07745f 
    6m 6m  2  kubelet, gke-mtd-cluster-default-pool-6b97d4d0-hcbt          Warning FailedSync  Error syncing pod, skipping: failed to "StartContainer" for "hadoop-master" with CrashLoopBackOff: "Back-off 20s restarting failed container=hadoop-master pod=hadoop-master-2828539450-rnwsd_default(562dae39-a757-11e7-a5a3-42010a8401c6)" 

    5m 5m  1  kubelet, gke-mtd-cluster-default-pool-6b97d4d0-hcbt  spec.containers{hadoop-master} Normal Started   Started container with id a8879a2c794b3e62f788ad56e403cb619644e9219b2c092e760ddeba506b2e44 
    5m 5m  1  kubelet, gke-mtd-cluster-default-pool-6b97d4d0-hcbt  spec.containers{hadoop-master} Normal Created   Created container with id a8879a2c794b3e62f788ad56e403cb619644e9219b2c092e760ddeba506b2e44 
    5m 5m  3  kubelet, gke-mtd-cluster-default-pool-6b97d4d0-hcbt          Warning FailedSync  Error syncing pod, skipping: failed to "StartContainer" for "hadoop-master" with CrashLoopBackOff: "Back-off 40s restarting failed container=hadoop-master pod=hadoop-master-2828539450-rnwsd_default(562dae39-a757-11e7-a5a3-42010a8401c6)" 

    5m 5m  1  kubelet, gke-mtd-cluster-default-pool-6b97d4d0-hcbt  spec.containers{hadoop-master} Normal Created   Created container with id 8907cdf19c51b87cea6e1e611649e874db2c21f47234df54bd9f27515cee0a0e 
    5m 5m  1  kubelet, gke-mtd-cluster-default-pool-6b97d4d0-hcbt  spec.containers{hadoop-master} Normal Started   Started container with id 8907cdf19c51b87cea6e1e611649e874db2c21f47234df54bd9f27515cee0a0e 
    5m 3m  7  kubelet, gke-mtd-cluster-default-pool-6b97d4d0-hcbt          Warning FailedSync  Error syncing pod, skipping: failed to "StartContainer" for "hadoop-master" with CrashLoopBackOff: "Back-off 1m20s restarting failed container=hadoop-master pod=hadoop-master-2828539450-rnwsd_default(562dae39-a757-11e7-a5a3-42010a8401c6)" 

    3m 3m  1  kubelet, gke-mtd-cluster-default-pool-6b97d4d0-hcbt  spec.containers{hadoop-master} Normal Created   Created container with id 294072caea596b47324914a235c1882dbc521cc355644a1e25ebf06f0e04301f 
    3m 3m  1  kubelet, gke-mtd-cluster-default-pool-6b97d4d0-hcbt  spec.containers{hadoop-master} Normal Started   Started container with id 294072caea596b47324914a235c1882dbc521cc355644a1e25ebf06f0e04301f 
    3m 1m  12  kubelet, gke-mtd-cluster-default-pool-6b97d4d0-hcbt          Warning FailedSync  Error syncing pod, skipping: failed to "StartContainer" for "hadoop-master" with CrashLoopBackOff: "Back-off 2m40s restarting failed container=hadoop-master pod=hadoop-master-2828539450-rnwsd_default(562dae39-a757-11e7-a5a3-42010a8401c6)" 

    6m 50s  7  kubelet, gke-mtd-cluster-default-pool-6b97d4d0-hcbt  spec.containers{hadoop-master} Normal Pulled   Container image "kiwenlau/hadoop:1.0" already present on machine 
    50s 50s  1  kubelet, gke-mtd-cluster-default-pool-6b97d4d0-hcbt  spec.containers{hadoop-master} Normal Created   Created container with id 7da7508ac864d04d47639b0d2c374a27c3e8a3351e13a2564e57453cf857426d 
    50s 50s  1  kubelet, gke-mtd-cluster-default-pool-6b97d4d0-hcbt  spec.containers{hadoop-master} Normal Started   Started container with id 7da7508ac864d04d47639b0d2c374a27c3e8a3351e13a2564e57453cf857426d 
    6m 0s  31  kubelet, gke-mtd-cluster-default-pool-6b97d4d0-hcbt  spec.containers{hadoop-master} Warning BackOff   Back-off restarting failed container 
    49s 0s  5  kubelet, gke-mtd-cluster-default-pool-6b97d4d0-hcbt          Warning FailedSync  Error syncing pod, skipping: failed to "StartContainer" for "hadoop-master" with CrashLoopBackOff: "Back-off 5m0s restarting failed container=hadoop-master pod=hadoop-master-2828539450-rnwsd_default(562dae39-a757-11e7-a5a3-42010a8401c6)" 

Output of kubectl logs:

kubectl logs hadoop-master-2828539450-rnwsd 
* Starting OpenBSD Secure Shell server sshd 
    ...done. 

Note that the Docker container itself does not start Hadoop. To start it, I have to connect to the container and start Hadoop manually, but for now I just want to be able to run the container on Kubernetes.

Thanks


How do you create the pod? Were any volumes or other objects specified during creation? –


@MohammadMahzoun I create it with the command kubectl run hadoop-master --image=kiwenlau/hadoop:1.0 --port=8088 --port=50070, as stated in the question. No volumes were specified during pod creation. –

Answer


The equivalent command in Kubernetes is

kubectl run -it hadoop-master --image=kiwenlau/hadoop:1.0 --port=8088 --port=50070 

However, this is actually not a Kubernetes problem; something is wrong in the Dockerfile.

A Docker container exits when its main process finishes.

With CMD [ "sh", "-c", "service ssh start; bash"], the SSH service is started in the background, and the container then stops as soon as that command finishes.
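This behavior can be illustrated outside Docker (a minimal sketch; the sleep stands in for the backgrounded sshd):

```shell
# Simulate the image's CMD: the "service" is sent to the background, so the
# shell that Docker would treat as the main process finishes immediately.
sh -c 'sleep 1 & echo "ssh-like service started in the background"'
# The main process has already exited at this point; a container whose CMD
# behaved like this would stop now.
echo "main process exit status: $?"
```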

An executable script or program should always be running in the foreground, such as ~/start-hadoop.sh.
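For example, the image's startup could be wrapped in a foreground entrypoint (a sketch only; the service name and the ~/start-hadoop.sh path are assumptions based on the kiwenlau image):

```shell
#!/bin/sh
# Entrypoint sketch: background services first, then exec a foreground
# process so the container's main process (PID 1) never exits on its own.
service ssh start

# exec replaces this shell; as long as start-hadoop.sh keeps running in the
# foreground, the container stays up.
exec ~/start-hadoop.sh
```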

FYI, you normally don't need to SSH into a container, as docker exec -it some_container bash should be enough.
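The Kubernetes counterpart of docker exec is kubectl exec (shown here with the pod name from the question; substitute your own):

```shell
# Open an interactive shell inside the running pod instead of using SSH.
kubectl exec -it hadoop-master-2828539450-rnwsd -- bash
```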


Thank you very much. The problem was indeed in Docker, but Hadoop needs SSH access from the master to the slaves and vice versa –
