
How is spark-shell run with YARN in client mode?

I have installed spark-1.6.1-bin-hadoop2.6.tgz on a Hadoop cluster with 15 nodes. All nodes run Java 1.8.0_72 and the latest version of Hadoop. The Hadoop cluster itself is functional; for example, YARN can run various MapReduce jobs successfully.

I can run the Spark shell locally on a single node without any problems using the following command: $SPARK_HOME/bin/spark-shell.

I can also run some of the Spark examples successfully, e.g. SparkPi with YARN in cluster mode; see the sketch below.
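
For reference, the cluster-mode SparkPi run was along these lines (a sketch: the examples jar name is the one shipped in the standard 1.6.1 binary distribution, and the trailing 10 is just the number of partitions for the Pi estimate):

    $SPARK_HOME/bin/spark-submit \
        --class org.apache.spark.examples.SparkPi \
        --master yarn \
        --deploy-mode cluster \
        $SPARK_HOME/lib/spark-examples-1.6.1-hadoop2.6.0.jar 10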

But when I try to run spark-shell on YARN with deploy mode client, I run into problems:

[email protected]:~$ $SPARK_HOME/bin/spark-shell --master yarn --deploy-mode client 
16/03/21 15:15:20 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable 
... 
Welcome to 
     ____    __ 
    /__/__ ___ _____/ /__ 
    _\ \/ _ \/ _ `/ __/ '_/ 
    /___/ .__/\_,_/_/ /_/\_\ version 1.6.1 
     /_/ 

Using Scala version 2.10.5 (Java HotSpot(TM) 64-Bit Server VM, Java 1.8.0_72) 
Type in expressions to have them evaluated. 
Type :help for more information. 
... 
16/03/21 15:15:24 INFO MemoryStore: MemoryStore started with capacity 511.1 MB 
16/03/21 15:15:24 INFO SparkEnv: Registering OutputCommitCoordinator 
16/03/21 15:15:24 INFO Utils: Successfully started service 'SparkUI' on port 4040. 
16/03/21 15:15:24 INFO SparkUI: Started SparkUI at http://10.108.57.32:4040 
16/03/21 15:15:24 INFO RMProxy: Connecting to ResourceManager at hadoop2/10.108.57.32:8032 
16/03/21 15:15:24 INFO Client: Requesting a new application from cluster with 13 NodeManagers 
16/03/21 15:15:25 INFO Client: Verifying our application has not requested more than the maximum memory capability of the cluster (131072 MB per container) 
16/03/21 15:15:25 INFO Client: Will allocate AM container, with 896 MB memory including 384 MB overhead 
16/03/21 15:15:25 INFO Client: Setting up container launch context for our AM 
16/03/21 15:15:25 INFO Client: Setting up the launch environment for our AM container 
16/03/21 15:15:25 INFO Client: Preparing resources for our AM container 
16/03/21 15:15:25 WARN DomainSocketFactory: The short-circuit local reads feature cannot be used because libhadoop cannot be loaded. 
16/03/21 15:15:25 INFO Client: Uploading resource file:/opt/spark-1.6.1-bin-hadoop2.6/lib/spark-assembly-1.6.1-hadoop2.6.0.jar -> hdfs://hadoop1:9000/user/hadoopu/.sparkStaging/application_1458568053208_0006/spark-assembly-1.6.1-hadoop2.6.0.jar 
16/03/21 15:15:28 INFO Client: Uploading resource file:/tmp/spark-c9077c60-b379-439e-aeb4-85948df70df5/__spark_conf__7479505398141092205.zip -> hdfs://hadoop1:9000/user/hadoopu/.sparkStaging/application_1458568053208_0006/__spark_conf__7479505398141092205.zip 
16/03/21 15:15:28 INFO SecurityManager: Changing view acls to: hadoopu 
16/03/21 15:15:28 INFO SecurityManager: Changing modify acls to: hadoopu 
16/03/21 15:15:28 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(hadoopu); users with modify permissions: Set(hadoopu) 
16/03/21 15:15:28 INFO Client: Submitting application 6 to ResourceManager 
16/03/21 15:15:28 INFO YarnClientImpl: Submitted application application_1458568053208_0006 
16/03/21 15:15:29 INFO Client: Application report for application_1458568053208_0006 (state: ACCEPTED) 
16/03/21 15:15:29 INFO Client: 
    client token: N/A 
    diagnostics: AM container is launched, waiting for AM container to Register with RM 
    ApplicationMaster host: N/A 
    ApplicationMaster RPC port: -1 
    queue: default 
    start time: 1458569728506 
    final status: UNDEFINED 
    tracking URL: http://hadoop2:8088/proxy/application_1458568053208_0006/ 
    user: hadoopu 
16/03/21 15:15:30 INFO Client: Application report for application_1458568053208_0006 (state: ACCEPTED) 
16/03/21 15:15:31 INFO Client: Application report for application_1458568053208_0006 (state: ACCEPTED) 
16/03/21 15:15:32 INFO YarnSchedulerBackend$YarnSchedulerEndpoint: ApplicationMaster registered as NettyRpcEndpointRef(null) 
16/03/21 15:15:32 INFO YarnClientSchedulerBackend: Add WebUI Filter. org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter, Map(PROXY_HOSTS -> hadoop2, PROXY_URI_BASES -> http://hadoop2:8088/proxy/application_1458568053208_0006), /proxy/application_1458568053208_0006 
16/03/21 15:15:32 INFO JettyUtils: Adding filter: org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter 
16/03/21 15:15:32 INFO Client: Application report for application_1458568053208_0006 (state: RUNNING) 
16/03/21 15:15:32 INFO Client: 
    client token: N/A 
    diagnostics: N/A 
    ApplicationMaster host: 10.108.57.41 
    ApplicationMaster RPC port: 0 
    queue: default 
    start time: 1458569728506 
    final status: UNDEFINED 
    tracking URL: http://hadoop2:8088/proxy/application_1458568053208_0006/ 
    user: hadoopu 
16/03/21 15:15:32 INFO YarnClientSchedulerBackend: Application application_1458568053208_0006 has started running. 
16/03/21 15:15:32 INFO Utils: Successfully started service 'org.apache.spark.network.netty.NettyBlockTransferService' on port 50170. 
16/03/21 15:15:32 INFO NettyBlockTransferService: Server created on 50170 
16/03/21 15:15:32 INFO BlockManagerMaster: Trying to register BlockManager 
16/03/21 15:15:32 INFO BlockManagerMasterEndpoint: Registering block manager 10.108.57.32:50170 with 511.1 MB RAM, BlockManagerId(driver, 10.108.57.32, 50170) 
16/03/21 15:15:32 INFO BlockManagerMaster: Registered BlockManager 
16/03/21 15:15:37 INFO YarnSchedulerBackend$YarnSchedulerEndpoint: ApplicationMaster registered as NettyRpcEndpointRef(null) 
16/03/21 15:15:37 INFO YarnClientSchedulerBackend: Add WebUI Filter. org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter, Map(PROXY_HOSTS -> hadoop2, PROXY_URI_BASES -> http://hadoop2:8088/proxy/application_1458568053208_0006), /proxy/application_1458568053208_0006 
16/03/21 15:15:37 INFO JettyUtils: Adding filter: org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter 
16/03/21 15:15:39 ERROR YarnClientSchedulerBackend: Yarn application has already exited with state FINISHED! 
16/03/21 15:15:39 INFO SparkUI: Stopped Spark web UI at http://10.108.57.32:4040 
16/03/21 15:15:39 INFO YarnClientSchedulerBackend: Shutting down all executors 
16/03/21 15:15:39 INFO YarnClientSchedulerBackend: Asking each executor to shut down 
16/03/21 15:15:39 INFO YarnClientSchedulerBackend: Stopped 
16/03/21 15:15:39 INFO MapOutputTrackerMasterEndpoint: MapOutputTrackerMasterEndpoint stopped! 
16/03/21 15:15:39 INFO MemoryStore: MemoryStore cleared 
16/03/21 15:15:39 INFO BlockManager: BlockManager stopped 
16/03/21 15:15:39 INFO BlockManagerMaster: BlockManagerMaster stopped 
16/03/21 15:15:39 INFO OutputCommitCoordinator$OutputCommitCoordinatorEndpoint: OutputCommitCoordinator stopped! 
16/03/21 15:15:39 INFO SparkContext: Successfully stopped SparkContext 
16/03/21 15:15:39 INFO RemoteActorRefProvider$RemotingTerminator: Shutting down remote daemon. 
16/03/21 15:15:39 INFO RemoteActorRefProvider$RemotingTerminator: Remote daemon shut down; proceeding with flushing remote transports. 
16/03/21 15:15:39 INFO RemoteActorRefProvider$RemotingTerminator: Remoting shut down. 
16/03/21 15:15:54 INFO YarnClientSchedulerBackend: SchedulerBackend is ready for scheduling beginning after waiting maxRegisteredResourcesWaitingTime: 30000(ms) 
16/03/21 15:15:54 ERROR SparkContext: Error initializing SparkContext. 
java.lang.NullPointerException 
    at org.apache.spark.SparkContext.<init>(SparkContext.scala:584) 
    at org.apache.spark.repl.SparkILoop.createSparkContext(SparkILoop.scala:1017) 
    at $line3.$read$$iwC$$iwC.<init>(<console>:15) 
    at $line3.$read$$iwC.<init>(<console>:24) 
    at $line3.$read.<init>(<console>:26) 
    at $line3.$read$.<init>(<console>:30) 
    at $line3.$read$.<clinit>(<console>) 
    at $line3.$eval$.<init>(<console>:7) 
    at $line3.$eval$.<clinit>(<console>) 
    at $line3.$eval.$print(<console>) 
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) 
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) 
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) 
    at java.lang.reflect.Method.invoke(Method.java:498) 
    at org.apache.spark.repl.SparkIMain$ReadEvalPrint.call(SparkIMain.scala:1065) 
    at org.apache.spark.repl.SparkIMain$Request.loadAndRun(SparkIMain.scala:1346) 
    at org.apache.spark.repl.SparkIMain.loadAndRunReq$1(SparkIMain.scala:840) 
    at org.apache.spark.repl.SparkIMain.interpret(SparkIMain.scala:871) 
    at org.apache.spark.repl.SparkIMain.interpret(SparkIMain.scala:819) 
    at org.apache.spark.repl.SparkILoop.reallyInterpret$1(SparkILoop.scala:857) 
    at org.apache.spark.repl.SparkILoop.interpretStartingWith(SparkILoop.scala:902) 
    at org.apache.spark.repl.SparkILoop.command(SparkILoop.scala:814) 
    at org.apache.spark.repl.SparkILoopInit$$anonfun$initializeSpark$1.apply(SparkILoopInit.scala:125) 
    at org.apache.spark.repl.SparkILoopInit$$anonfun$initializeSpark$1.apply(SparkILoopInit.scala:124) 
    at org.apache.spark.repl.SparkIMain.beQuietDuring(SparkIMain.scala:324) 
    at org.apache.spark.repl.SparkILoopInit$class.initializeSpark(SparkILoopInit.scala:124) 
    at org.apache.spark.repl.SparkILoop.initializeSpark(SparkILoop.scala:64) 
    at org.apache.spark.repl.SparkILoop$$anonfun$org$apache$spark$repl$SparkILoop$$process$1$$anonfun$apply$mcZ$sp$5.apply$mcV$sp(SparkILoop.scala:974) 
    at org.apache.spark.repl.SparkILoopInit$class.runThunks(SparkILoopInit.scala:159) 
    at org.apache.spark.repl.SparkILoop.runThunks(SparkILoop.scala:64) 
    at org.apache.spark.repl.SparkILoopInit$class.postInitialization(SparkILoopInit.scala:108) 
    at org.apache.spark.repl.SparkILoop.postInitialization(SparkILoop.scala:64) 
    at org.apache.spark.repl.SparkILoop$$anonfun$org$apache$spark$repl$SparkILoop$$process$1.apply$mcZ$sp(SparkILoop.scala:991) 
    at org.apache.spark.repl.SparkILoop$$anonfun$org$apache$spark$repl$SparkILoop$$process$1.apply(SparkILoop.scala:945) 
    at org.apache.spark.repl.SparkILoop$$anonfun$org$apache$spark$repl$SparkILoop$$process$1.apply(SparkILoop.scala:945) 
    at scala.tools.nsc.util.ScalaClassLoader$.savingContextLoader(ScalaClassLoader.scala:135) 
    at org.apache.spark.repl.SparkILoop.org$apache$spark$repl$SparkILoop$$process(SparkILoop.scala:945) 
    at org.apache.spark.repl.SparkILoop.process(SparkILoop.scala:1059) 
    at org.apache.spark.repl.Main$.main(Main.scala:31) 
    at org.apache.spark.repl.Main.main(Main.scala) 
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) 
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) 
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) 
    at java.lang.reflect.Method.invoke(Method.java:498) 
    at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:731) 
    at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:181) 
    at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:206) 
    at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:121) 
    at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala) 
16/03/21 15:15:54 INFO SparkContext: SparkContext already stopped. 
java.lang.NullPointerException 
    at org.apache.spark.SparkContext.<init>(SparkContext.scala:584) 
    at org.apache.spark.repl.SparkILoop.createSparkContext(SparkILoop.scala:1017) 
    at $iwC$$iwC.<init>(<console>:15) 
    at $iwC.<init>(<console>:24) 
    at <init>(<console>:26) 
    at .<init>(<console>:30) 
    at .<clinit>(<console>) 
    at .<init>(<console>:7) 
    at .<clinit>(<console>) 
    at $print(<console>) 
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) 
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) 
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) 
    at java.lang.reflect.Method.invoke(Method.java:498) 
    at org.apache.spark.repl.SparkIMain$ReadEvalPrint.call(SparkIMain.scala:1065) 
    at org.apache.spark.repl.SparkIMain$Request.loadAndRun(SparkIMain.scala:1346) 
    at org.apache.spark.repl.SparkIMain.loadAndRunReq$1(SparkIMain.scala:840) 
    at org.apache.spark.repl.SparkIMain.interpret(SparkIMain.scala:871) 
    at org.apache.spark.repl.SparkIMain.interpret(SparkIMain.scala:819) 
    at org.apache.spark.repl.SparkILoop.reallyInterpret$1(SparkILoop.scala:857) 
    at org.apache.spark.repl.SparkILoop.interpretStartingWith(SparkILoop.scala:902) 
    at org.apache.spark.repl.SparkILoop.command(SparkILoop.scala:814) 
    at org.apache.spark.repl.SparkILoopInit$$anonfun$initializeSpark$1.apply(SparkILoopInit.scala:125) 
    at org.apache.spark.repl.SparkILoopInit$$anonfun$initializeSpark$1.apply(SparkILoopInit.scala:124) 
    at org.apache.spark.repl.SparkIMain.beQuietDuring(SparkIMain.scala:324) 
    at org.apache.spark.repl.SparkILoopInit$class.initializeSpark(SparkILoopInit.scala:124) 
    at org.apache.spark.repl.SparkILoop.initializeSpark(SparkILoop.scala:64) 
    at org.apache.spark.repl.SparkILoop$$anonfun$org$apache$spark$repl$SparkILoop$$process$1$$anonfun$apply$mcZ$sp$5.apply$mcV$sp(SparkILoop.scala:974) 
    at org.apache.spark.repl.SparkILoopInit$class.runThunks(SparkILoopInit.scala:159) 
    at org.apache.spark.repl.SparkILoop.runThunks(SparkILoop.scala:64) 
    at org.apache.spark.repl.SparkILoopInit$class.postInitialization(SparkILoopInit.scala:108) 
    at org.apache.spark.repl.SparkILoop.postInitialization(SparkILoop.scala:64) 
    at org.apache.spark.repl.SparkILoop$$anonfun$org$apache$spark$repl$SparkILoop$$process$1.apply$mcZ$sp(SparkILoop.scala:991) 
    at org.apache.spark.repl.SparkILoop$$anonfun$org$apache$spark$repl$SparkILoop$$process$1.apply(SparkILoop.scala:945) 
    at org.apache.spark.repl.SparkILoop$$anonfun$org$apache$spark$repl$SparkILoop$$process$1.apply(SparkILoop.scala:945) 
    at scala.tools.nsc.util.ScalaClassLoader$.savingContextLoader(ScalaClassLoader.scala:135) 
    at org.apache.spark.repl.SparkILoop.org$apache$spark$repl$SparkILoop$$process(SparkILoop.scala:945) 
    at org.apache.spark.repl.SparkILoop.process(SparkILoop.scala:1059) 
    at org.apache.spark.repl.Main$.main(Main.scala:31) 
    at org.apache.spark.repl.Main.main(Main.scala) 
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) 
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) 
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) 
    at java.lang.reflect.Method.invoke(Method.java:498) 
    at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:731) 
    at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:181) 
    at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:206) 
    at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:121) 
    at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala) 

java.lang.NullPointerException 
    at org.apache.spark.sql.SQLContext$.createListenerAndUI(SQLContext.scala:1367) 
    at org.apache.spark.sql.hive.HiveContext.<init>(HiveContext.scala:101) 
    at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) 
    at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) 
    at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) 
    at java.lang.reflect.Constructor.newInstance(Constructor.java:423) 
    at org.apache.spark.repl.SparkILoop.createSQLContext(SparkILoop.scala:1028) 
    at $iwC$$iwC.<init>(<console>:15) 
    at $iwC.<init>(<console>:24) 
    at <init>(<console>:26) 
    at .<init>(<console>:30) 
    at .<clinit>(<console>) 
    at .<init>(<console>:7) 
    at .<clinit>(<console>) 
    at $print(<console>) 
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) 
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) 
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) 
    at java.lang.reflect.Method.invoke(Method.java:498) 
    at org.apache.spark.repl.SparkIMain$ReadEvalPrint.call(SparkIMain.scala:1065) 
    at org.apache.spark.repl.SparkIMain$Request.loadAndRun(SparkIMain.scala:1346) 
    at org.apache.spark.repl.SparkIMain.loadAndRunReq$1(SparkIMain.scala:840) 
    at org.apache.spark.repl.SparkIMain.interpret(SparkIMain.scala:871) 
    at org.apache.spark.repl.SparkIMain.interpret(SparkIMain.scala:819) 
    at org.apache.spark.repl.SparkILoop.reallyInterpret$1(SparkILoop.scala:857) 
    at org.apache.spark.repl.SparkILoop.interpretStartingWith(SparkILoop.scala:902) 
    at org.apache.spark.repl.SparkILoop.command(SparkILoop.scala:814) 
    at org.apache.spark.repl.SparkILoopInit$$anonfun$initializeSpark$1.apply(SparkILoopInit.scala:132) 
    at org.apache.spark.repl.SparkILoopInit$$anonfun$initializeSpark$1.apply(SparkILoopInit.scala:124) 
    at org.apache.spark.repl.SparkIMain.beQuietDuring(SparkIMain.scala:324) 
    at org.apache.spark.repl.SparkILoopInit$class.initializeSpark(SparkILoopInit.scala:124) 
    at org.apache.spark.repl.SparkILoop.initializeSpark(SparkILoop.scala:64) 
    at org.apache.spark.repl.SparkILoop$$anonfun$org$apache$spark$repl$SparkILoop$$process$1$$anonfun$apply$mcZ$sp$5.apply$mcV$sp(SparkILoop.scala:974) 
    at org.apache.spark.repl.SparkILoopInit$class.runThunks(SparkILoopInit.scala:159) 
    at org.apache.spark.repl.SparkILoop.runThunks(SparkILoop.scala:64) 
    at org.apache.spark.repl.SparkILoopInit$class.postInitialization(SparkILoopInit.scala:108) 
    at org.apache.spark.repl.SparkILoop.postInitialization(SparkILoop.scala:64) 
    at org.apache.spark.repl.SparkILoop$$anonfun$org$apache$spark$repl$SparkILoop$$process$1.apply$mcZ$sp(SparkILoop.scala:991) 
    at org.apache.spark.repl.SparkILoop$$anonfun$org$apache$spark$repl$SparkILoop$$process$1.apply(SparkILoop.scala:945) 
    at org.apache.spark.repl.SparkILoop$$anonfun$org$apache$spark$repl$SparkILoop$$process$1.apply(SparkILoop.scala:945) 
    at scala.tools.nsc.util.ScalaClassLoader$.savingContextLoader(ScalaClassLoader.scala:135) 
    at org.apache.spark.repl.SparkILoop.org$apache$spark$repl$SparkILoop$$process(SparkILoop.scala:945) 
    at org.apache.spark.repl.SparkILoop.process(SparkILoop.scala:1059) 
    at org.apache.spark.repl.Main$.main(Main.scala:31) 
    at org.apache.spark.repl.Main.main(Main.scala) 
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) 
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) 
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) 
    at java.lang.reflect.Method.invoke(Method.java:498) 
    at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:731) 
    at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:181) 
    at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:206) 
    at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:121) 
    at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala) 

<console>:16: error: not found: value sqlContext 
     import sqlContext.implicits._ 
       ^
<console>:16: error: not found: value sqlContext 
     import sqlContext.sql 
       ^

scala> 

scala> sc 
<console>:20: error: not found: value sc 
       sc 
      ^

scala> 

I have also gone to the YARN web UI, found the spark-shell in the list of FINISHED applications, and then clicked on the application to view its logs. I found stderr logs on two nodes:

SLF4J: Class path contains multiple SLF4J bindings. 
SLF4J: Found binding in [jar:file:/mnt/ssd1/tmp/nm-local-dir/usercache/hadoopu/filecache/13/spark-assembly-1.6.1-hadoop2.6.0.jar!/org/slf4j/impl/StaticLoggerBinder.class] 
SLF4J: Found binding in [jar:file:/opt/hadoop-3.0.0-SNAPSHOT/share/hadoop/common/lib/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class] 
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation. 
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory] 
16/03/21 15:07:20 INFO ApplicationMaster: Registered signal handlers for [TERM, HUP, INT] 
16/03/21 15:07:21 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable 
16/03/21 15:07:21 INFO ApplicationMaster: ApplicationAttemptId: appattempt_1458568053208_0005_000002 
16/03/21 15:07:22 WARN DomainSocketFactory: The short-circuit local reads feature cannot be used because libhadoop cannot be loaded. 
16/03/21 15:07:22 INFO SecurityManager: Changing view acls to: hadoopu 
16/03/21 15:07:22 INFO SecurityManager: Changing modify acls to: hadoopu 
16/03/21 15:07:22 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(hadoopu); users with modify permissions: Set(hadoopu) 
16/03/21 15:07:22 INFO ApplicationMaster: Waiting for Spark driver to be reachable. 
16/03/21 15:07:22 INFO ApplicationMaster: Driver now available: 10.108.57.32:39824 
16/03/21 15:07:22 INFO ApplicationMaster$AMEndpoint: Add WebUI Filter. AddWebUIFilter(org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter,Map(PROXY_HOSTS -> hadoop2, PROXY_URI_BASES -> http://hadoop2:8088/proxy/application_1458568053208_0005),/proxy/application_1458568053208_0005) 
16/03/21 15:07:22 INFO RMProxy: Connecting to ResourceManager at hadoop2/10.108.57.32:8030 
16/03/21 15:07:22 INFO YarnRMClient: Registering the ApplicationMaster 
16/03/21 15:07:22 INFO YarnAllocator: Will request 2 executor containers, each with 1 cores and 1408 MB memory including 384 MB overhead 
16/03/21 15:07:22 INFO YarnAllocator: Container request (host: Any, capability: <memory:1408, vCores:1>) 
16/03/21 15:07:22 INFO YarnAllocator: Container request (host: Any, capability: <memory:1408, vCores:1>) 
16/03/21 15:07:22 INFO ApplicationMaster: Started progress reporter thread with (heartbeat : 3000, initial allocation : 200) intervals 
16/03/21 15:07:23 INFO AMRMClientImpl: Received new token for : hadoop14:32420 
16/03/21 15:07:23 INFO AMRMClientImpl: Received new token for : hadoop3:35904 
16/03/21 15:07:23 INFO YarnAllocator: Launching container container_1458568053208_0005_02_000002 for on host hadoop14 
16/03/21 15:07:23 INFO YarnAllocator: Launching ExecutorRunnable. driverUrl: spark://[email protected]:39824, executorHostname: hadoop14 
16/03/21 15:07:23 INFO YarnAllocator: Launching container container_1458568053208_0005_02_000003 for on host hadoop3 
16/03/21 15:07:23 INFO ExecutorRunnable: Starting Executor Container 
16/03/21 15:07:23 INFO YarnAllocator: Launching ExecutorRunnable. driverUrl: spark://[email protected]:39824, executorHostname: hadoop3 
16/03/21 15:07:23 INFO ExecutorRunnable: Starting Executor Container 
16/03/21 15:07:23 INFO YarnAllocator: Received 2 containers from YARN, launching executors on 2 of them. 
16/03/21 15:07:23 INFO ContainerManagementProtocolProxy: yarn.client.max-cached-nodemanagers-proxies : 0 
16/03/21 15:07:23 INFO ContainerManagementProtocolProxy: yarn.client.max-cached-nodemanagers-proxies : 0 
16/03/21 15:07:23 INFO ExecutorRunnable: Setting up ContainerLaunchContext 
16/03/21 15:07:23 INFO ExecutorRunnable: Setting up ContainerLaunchContext 
16/03/21 15:07:23 INFO ExecutorRunnable: Preparing Local resources 
16/03/21 15:07:23 INFO ExecutorRunnable: Preparing Local resources 
16/03/21 15:07:23 INFO ExecutorRunnable: Prepared Local resources Map(__spark__.jar -> resource { scheme: "hdfs" host: "hadoop1" port: 9000 file: "/user/hadoopu/.sparkStaging/application_1458568053208_0005/spark-assembly-1.6.1-hadoop2.6.0.jar" } size: 187698038 timestamp: 1458569230874 type: FILE visibility: PRIVATE) 
16/03/21 15:07:23 INFO ExecutorRunnable: Prepared Local resources Map(__spark__.jar -> resource { scheme: "hdfs" host: "hadoop1" port: 9000 file: "/user/hadoopu/.sparkStaging/application_1458568053208_0005/spark-assembly-1.6.1-hadoop2.6.0.jar" } size: 187698038 timestamp: 1458569230874 type: FILE visibility: PRIVATE) 
16/03/21 15:07:23 INFO ExecutorRunnable: 
=============================================================================== 
YARN executor launch context: 
    env: 
    CLASSPATH -> {{PWD}}<CPS>{{PWD}}/__spark__.jar<CPS>$HADOOP_CONF_DIR<CPS>$HADOOP_COMMON_HOME/share/hadoop/common/*<CPS>$HADOOP_COMMON_HOME/share/hadoop/common/lib/*<CPS>$HADOOP_HDFS_HOME/share/hadoop/hdfs/*<CPS>$HADOOP_HDFS_HOME/share/hadoop/hdfs/lib/*<CPS>$HADOOP_YARN_HOME/share/hadoop/yarn/*<CPS>$HADOOP_YARN_HOME/share/hadoop/yarn/lib/*<CPS>$HADOOP_PREFIX/share/hadoop/tools/lib/*<CPS>$HADOOP_MAPRED_HOME/share/hadoop/mapreduce/*<CPS>$HADOOP_MAPRED_HOME/share/hadoop/mapreduce/lib/* 
    SPARK_LOG_URL_STDERR -> http://hadoop3:8042/node/containerlogs/container_1458568053208_0005_02_000003/hadoopu/stderr?start=-4096 
    SPARK_YARN_STAGING_DIR -> .sparkStaging/application_1458568053208_0005 
    SPARK_YARN_CACHE_FILES_FILE_SIZES -> 187698038 
    SPARK_USER -> hadoopu 
    SPARK_YARN_CACHE_FILES_VISIBILITIES -> PRIVATE 
    SPARK_YARN_MODE -> true 
    SPARK_YARN_CACHE_FILES_TIME_STAMPS -> 1458569230874 
    SPARK_LOG_URL_STDOUT -> http://hadoop3:8042/node/containerlogs/container_1458568053208_0005_02_000003/hadoopu/stdout?start=-4096 
    SPARK_YARN_CACHE_FILES -> hdfs://hadoop1:9000/user/hadoopu/.sparkStaging/application_1458568053208_0005/spark-assembly-1.6.1-hadoop2.6.0.jar#__spark__.jar 

    command: 
    {{JAVA_HOME}}/bin/java -server -XX:OnOutOfMemoryError='kill %p' -Xms1024m -Xmx1024m -Djava.io.tmpdir={{PWD}}/tmp '-Dspark.driver.port=39824' -Dspark.yarn.app.container.log.dir=<LOG_DIR> org.apache.spark.executor.CoarseGrainedExecutorBackend --driver-url spark://[email protected]:39824 --executor-id 2 --hostname hadoop3 --cores 1 --app-id application_1458568053208_0005 --user-class-path file:$PWD/__app__.jar 1> <LOG_DIR>/stdout 2> <LOG_DIR>/stderr 
=============================================================================== 

16/03/21 15:07:23 INFO ExecutorRunnable: 
=============================================================================== 
YARN executor launch context: 
    env: 
    CLASSPATH -> {{PWD}}<CPS>{{PWD}}/__spark__.jar<CPS>$HADOOP_CONF_DIR<CPS>$HADOOP_COMMON_HOME/share/hadoop/common/*<CPS>$HADOOP_COMMON_HOME/share/hadoop/common/lib/*<CPS>$HADOOP_HDFS_HOME/share/hadoop/hdfs/*<CPS>$HADOOP_HDFS_HOME/share/hadoop/hdfs/lib/*<CPS>$HADOOP_YARN_HOME/share/hadoop/yarn/*<CPS>$HADOOP_YARN_HOME/share/hadoop/yarn/lib/*<CPS>$HADOOP_PREFIX/share/hadoop/tools/lib/*<CPS>$HADOOP_MAPRED_HOME/share/hadoop/mapreduce/*<CPS>$HADOOP_MAPRED_HOME/share/hadoop/mapreduce/lib/* 
    SPARK_LOG_URL_STDERR -> http://hadoop14:8042/node/containerlogs/container_1458568053208_0005_02_000002/hadoopu/stderr?start=-4096 
    SPARK_YARN_STAGING_DIR -> .sparkStaging/application_1458568053208_0005 
    SPARK_YARN_CACHE_FILES_FILE_SIZES -> 187698038 
    SPARK_USER -> hadoopu 
    SPARK_YARN_CACHE_FILES_VISIBILITIES -> PRIVATE 
    SPARK_YARN_MODE -> true 
    SPARK_YARN_CACHE_FILES_TIME_STAMPS -> 1458569230874 
    SPARK_LOG_URL_STDOUT -> http://hadoop14:8042/node/containerlogs/container_1458568053208_0005_02_000002/hadoopu/stdout?start=-4096 
    SPARK_YARN_CACHE_FILES -> hdfs://hadoop1:9000/user/hadoopu/.sparkStaging/application_1458568053208_0005/spark-assembly-1.6.1-hadoop2.6.0.jar#__spark__.jar 

    command: 
    {{JAVA_HOME}}/bin/java -server -XX:OnOutOfMemoryError='kill %p' -Xms1024m -Xmx1024m -Djava.io.tmpdir={{PWD}}/tmp '-Dspark.driver.port=39824' -Dspark.yarn.app.container.log.dir=<LOG_DIR> org.apache.spark.executor.CoarseGrainedExecutorBackend --driver-url spark://[email protected]:39824 --executor-id 1 --hostname hadoop14 --cores 1 --app-id application_1458568053208_0005 --user-class-path file:$PWD/__app__.jar 1> <LOG_DIR>/stdout 2> <LOG_DIR>/stderr 
=============================================================================== 
... 
16/03/21 15:07:25 ERROR ApplicationMaster: RECEIVED SIGNAL 15: SIGTERM 
16/03/21 15:07:25 INFO ApplicationMaster: Final app status: UNDEFINED, exitCode: 0, (reason: Shutdown hook called before final status was reported.) 
16/03/21 15:07:25 INFO ApplicationMaster: Unregistering ApplicationMaster with UNDEFINED (diag message: Shutdown hook called before final status was reported.) 
16/03/21 15:07:25 INFO AMRMClientImpl: Waiting for application to be successfully unregistered. 
16/03/21 15:07:25 INFO ApplicationMaster: Deleting staging directory .sparkStaging/application_1458568053208_0005 
16/03/21 15:07:25 INFO ShutdownHookManager: Shutdown hook called 

Any ideas why I cannot run the Spark shell in client mode on YARN?


Your executors have run out of memory. You should be able to pass executor-memory and driver-memory arguments to spark-shell. –


I don't see any memory-related problems in the logs. Can you point to the lines that indicate a memory error? In the meantime I have also tried 'spark-shell --master yarn --deploy-mode client --driver-memory 512m --executor-memory 512m', only to get a similarly problematic result. The nodes have 128 GB of RAM, and I cannot see any out-of-memory messages. –


I think you are showing logs from two different applications, i.e. the spark-shell log is for 'application_1458568053208_0006' while the YARN log is for 'appattempt_1458568053208_0005_000002', which I believe belongs to an earlier (probably successful) run. –

Answer


I had the same problem. It turned out to be a firewall between my login node and the cluster: the cluster was trying to connect back to the login node on a random port, which was blocked. Either remove the firewall rules, or move your shell to one of the cluster's nodes, where no firewall rules block access.
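
If neither is an option, a workaround (a sketch, not part of the original answer) is to pin the driver-side ports to fixed values and open exactly those ports in the firewall. spark.driver.port and spark.blockManager.port are standard Spark 1.6 properties; the port numbers 40000/40001 here are arbitrary examples:

    # Pin the callback ports so the firewall can allow exactly these two.
    $SPARK_HOME/bin/spark-shell \
        --master yarn \
        --deploy-mode client \
        --conf spark.driver.port=40000 \
        --conf spark.blockManager.port=40001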
