I have a Storm topology for which I call setNumWorkers(1). On Storm 0.10.0, two worker processes are started even though I set workers = 1, and the UI reports workers = 1.
When I look at the Storm UI for this running topology, I see Num workers set to "1".
However, when I log in to the node running the supervisor, I see two processes that have the same value for -Dworker.id and for -Dworker.port.
I am including the output that "ps" shows me for these two processes below. My question is: why are there two processes that appear to be configured as worker processes when I requested only one? (Note: the Storm UI confirms that I have only one worker.)
This matters to me because when I do any profiling or analysis of which resources my topology consumes, I want to know which process to zero in on.
ps output
root 787 20.0 0.6 5858228 78388 ? Sl 05:04 0:00 /usr/lib/jvm/java-8-openjdk-amd64/bin/java -cp /opt/apache-storm-0.10.0/lib/log4j-slf4j-impl-2.1.jar:/opt/apache-storm-0.10.0/lib/servlet-api-2.5.jar:/opt/apache-storm-0.10.0/lib/clojure-1.6.0.jar:/opt/apache-storm-0.10.0/lib/slf4j-api-1.7.7.jar:/opt/apache-storm-0.10.0/lib/hadoop-auth-2.4.0.jar:/opt/apache-storm-0.10.0/lib/log4j-api-2.1.jar:/opt/apache-storm-0.10.0/lib/disruptor-2.10.4.jar:/opt/apache-storm-0.10.0/lib/storm-core-0.10.0.jar:/opt/apache-storm-0.10.0/lib/log4j-over-slf4j-1.6.6.jar:/opt/apache-storm-0.10.0/lib/log4j-core-2.1.jar:/opt/apache-storm-0.10.0/lib/asm-4.0.jar:/opt/apache-storm-0.10.0/lib/kryo-2.21.jar:/opt/apache-storm-0.10.0/lib/reflectasm-1.07-shaded.jar:/opt/apache-storm-0.10.0/lib/minlog-1.2.jar:/opt/apache-storm-0.10.0/conf:/opt/apache-storm-0.10.0/storm-local/supervisor/stormdist/big-storm-job-1-1487739502/stormjar.jar -Dlogfile.name=big-storm-job-1-1487739502-worker-6700.log -Dstorm.home=/opt/apache-storm-0.10.0 -Dstorm.id=big-storm-job-1-1487739502 -Dworker.id=e8e03e95-1fcc-492a-b5e4-51ef7b8db2ee -Dworker.port=6700 -Dstorm.log.dir=/opt/apache-storm-0.10.0/logs -Dlog4j.configurationFile=/opt/apache-storm-0.10.0/log4j2/worker.xml backtype.storm.LogWriter /usr/lib/jvm/java-8-openjdk-amd64/bin/java -server -Xmx768m -Djava.library.path=/opt/apache-storm-0.10.0/storm-local/supervisor/stormdist/big-storm-job-1-1487739502/resources/Linux-amd64:/opt/apache-storm-0.10.0/storm-local/supervisor/stormdist/big-storm-job-1-1487739502/resources:/usr/local/lib:/opt/local/lib:/usr/lib -Dlogfile.name=big-storm-job-1-1487739502-worker-6700.log -Dstorm.home=/opt/apache-storm-0.10.0 -Dstorm.conf.file= -Dstorm.options= -Dstorm.log.dir=/opt/apache-storm-0.10.0/logs -Dlogging.sensitivity=S3 -Dlog4j.configurationFile=/opt/apache-storm-0.10.0/log4j2/worker.xml -Dstorm.id=big-storm-job-1-1487739502 -Dworker.id=e8e03e95-1fcc-492a-b5e4-51ef7b8db2ee -Dworker.port=6700 -cp 
/opt/apache-storm-0.10.0/lib/log4j-slf4j-impl-2.1.jar:/opt/apache-storm-0.10.0/lib/servlet-api-2.5.jar:/opt/apache-storm-0.10.0/lib/clojure-1.6.0.jar:/opt/apache-storm-0.10.0/lib/slf4j-api-1.7.7.jar:/opt/apache-storm-0.10.0/lib/hadoop-auth-2.4.0.jar:/opt/apache-storm-0.10.0/lib/log4j-api-2.1.jar:/opt/apache-storm-0.10.0/lib/disruptor-2.10.4.jar:/opt/apache-storm-0.10.0/lib/storm-core-0.10.0.jar:/opt/apache-storm-0.10.0/lib/log4j-over-slf4j-1.6.6.jar:/opt/apache-storm-0.10.0/lib/log4j-core-2.1.jar:/opt/apache-storm-0.10.0/lib/asm-4.0.jar:/opt/apache-storm-0.10.0/lib/kryo-2.21.jar:/opt/apache-storm-0.10.0/lib/reflectasm-1.07-shaded.jar:/opt/apache-storm-0.10.0/lib/minlog-1.2.jar:/opt/apache-storm-0.10.0/conf:/opt/apache-storm-0.10.0/storm-local/supervisor/stormdist/big-storm-job-1-1487739502/stormjar.jar backtype.storm.daemon.worker big-storm-job-1-1487739502 8fde2226-4b32-406d-8809-81ed88e5ae1f 6700 e8e03e95-1fcc-492a-b5e4-51ef7b8db2ee
root 805 203 2.0 4308648 255336 ? Sl 05:04 0:06 /usr/lib/jvm/java-8-openjdk-amd64/bin/java -server -Xmx768m -Djava.library.path=/opt/apache-storm-0.10.0/storm-local/supervisor/stormdist/big-storm-job-1-1487739502/resources/Linux-amd64:/opt/apache-storm-0.10.0/storm-local/supervisor/stormdist/big-storm-job-1-1487739502/resources:/usr/local/lib:/opt/local/lib:/usr/lib -Dlogfile.name=big-storm-job-1-1487739502-worker-6700.log -Dstorm.home=/opt/apache-storm-0.10.0 -Dstorm.conf.file= -Dstorm.options= -Dstorm.log.dir=/opt/apache-storm-0.10.0/logs -Dlogging.sensitivity=S3 -Dlog4j.configurationFile=/opt/apache-storm-0.10.0/log4j2/worker.xml -Dstorm.id=big-storm-job-1-1487739502 -Dworker.id=e8e03e95-1fcc-492a-b5e4-51ef7b8db2ee -Dworker.port=6700 -cp /opt/apache-storm-0.10.0/lib/log4j-slf4j-impl-2.1.jar:/opt/apache-storm-0.10.0/lib/servlet-api-2.5.jar:/opt/apache-storm-0.10.0/lib/clojure-1.6.0.jar:/opt/apache-storm-0.10.0/lib/slf4j-api-1.7.7.jar:/opt/apache-storm-0.10.0/lib/hadoop-auth-2.4.0.jar:/opt/apache-storm-0.10.0/lib/log4j-api-2.1.jar:/opt/apache-storm-0.10.0/lib/disruptor-2.10.4.jar:/opt/apache-storm-0.10.0/lib/storm-core-0.10.0.jar:/opt/apache-storm-0.10.0/lib/log4j-over-slf4j-1.6.6.jar:/opt/apache-storm-0.10.0/lib/log4j-core-2.1.jar:/opt/apache-storm-0.10.0/lib/asm-4.0.jar:/opt/apache-storm-0.10.0/lib/kryo-2.21.jar:/opt/apache-storm-0.10.0/lib/reflectasm-1.07-shaded.jar:/opt/apache-storm-0.10.0/lib/minlog-1.2.jar:/opt/apache-storm-0.10.0/conf:/opt/apache-storm-0.10.0/storm-local/supervisor/stormdist/big-storm-job-1-1487739502/stormjar.jar backtype.storm.daemon.worker big-storm-job-1-1487739502 8fde2226-4b32-406d-8809-81ed88e5ae1f 6700 e8e03e95-1fcc-492a-b5e4-51ef7b8db2ee
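One detail worth noticing in the two command lines above: the outer JVM main class differs. PID 787 was started with `backtype.storm.LogWriter` (its arguments happen to contain the full nested worker launch command, which is why it looks like a second worker), while PID 805 was started with `backtype.storm.daemon.worker`, the JVM that actually runs the topology. A small sketch of how one might extract the outer main class from a `ps` command line to tell the two apart (the helper name is mine, and the command lines are abbreviated versions of the ps output above):

```shell
# Extract the outer JVM main class from a ps command line: the first token
# in the "backtype.storm.*" namespace is the class this JVM was started with.
# (Illustrative helper; command-line strings abbreviated from the ps dump above.)
main_class_of() {
  printf '%s\n' "$1" | tr ' ' '\n' | grep '^backtype\.storm\.' | head -n 1
}

# PID 787: the LogWriter wrapper -- the nested worker command appears later
# in its arguments, but its own main class comes first.
main_class_of 'java -cp storm-core-0.10.0.jar backtype.storm.LogWriter java -server backtype.storm.daemon.worker big-storm-job-1-1487739502 6700'
# -> backtype.storm.LogWriter

# PID 805: the actual worker JVM, the one worth profiling.
main_class_of 'java -server -cp storm-core-0.10.0.jar backtype.storm.daemon.worker big-storm-job-1-1487739502 6700'
# -> backtype.storm.daemon.worker
```

Against the live supervisor node one could feed it `ps -o args= -p 787`. As far as I understand Storm 0.10.0, the LogWriter process only captures the worker's stdout/stderr into the log file, so for profiling purposes the process to target is the one whose main class is `backtype.storm.daemon.worker`.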
Just in case it is useful for anyone trying to get a better picture of my environment, here is my Docker configuration for Storm (and other things). Hopefully it helps.
version: '2'
services:
  zookeeper:
    image: wurstmeister/zookeeper
    container_name: zk
    hostname: zk
    ports:
      - "2181:2181"
    networks:
      storm:
  kafka:
    image: wurstmeister/kafka:0.8.2.2-1
    container_name: kafka
    hostname: kafka
    ports:
      - "9092:9092"
    environment:
      KAFKA_BROKER_ID: 1
      KAFKA_ADVERTISED_HOST_NAME: 10.211.55.4
      KAFKA_ZOOKEEPER_CONNECT: 10.211.55.4
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
  nimbus:
    image: sunside/storm-nimbus
    container_name: storm-nimbus
    hostname: storm-nimbus
    ports:
      - "49773:49772"
      - "49772:49773"
      - "49627:49627"
    environment:
      - "LOCAL_HOSTNAME=nimbus"
      - "ZOOKEEPER_ADDRESS=zk"
      - "ZOOKEEPER_PORT=2181"
      - "NIMBUS_ADDRESS=nimbus"
      - "NIMBUS_THRIFT_PORT=49627"
      - "DRPC_PORT=49772"
      - "DRPCI_PORT=49773"
    volumes:
      - /media/psf/Home/dev/storm-pipeline:/pipeline
    networks:
      storm:
  supervisor:
    image: sunside/storm-supervisor
    container_name: storm-supervisor
    hostname: storm-supervisor
    ports:
      - "8000:8000"
    environment:
      - "LOCAL_HOSTNAME=supervisor"
      - "NIMBUS_ADDRESS=nimbus"
      - "NIMBUS_THRIFT_PORT=49627"
      - "DRPC_PORT=49772"
      - "DRPCI_PORT=49773"
      - "ZOOKEEPER_ADDRESS=zk"
      - "ZOOKEEPER_PORT=2181"
    networks:
      storm:
  ui:
    image: sunside/storm-ui
    container_name: storm-ui
    hostname: storm-ui
    ports:
      - "8888:8080"
    environment:
      - "LOCAL_HOSTNAME=ui"
      - "NIMBUS_ADDRESS=nimbus"
      - "NIMBUS_THRIFT_PORT=49627"
      - "DRPC_PORT=49772"
      - "DRPCI_PORT=49773"
      - "ZOOKEEPER_ADDRESS=zk"
      - "ZOOKEEPER_PORT=2181"
    networks:
      storm:
  elasticsearch:
    image: elasticsearch:2.3
    container_name: elasticsearch
    hostname: elasticsearch
    ports:
      - "9200:9200"
    networks:
      storm:
networks:
  storm:
    external: true
Thanks to Deepna Bains of Hortonworks support for identifying the cause of the discrepancy between what was expected and what we got.