
I am following this tutorial: http://spark.apache.org/docs/latest/quick-start.html What does the following error in PySpark mean, and how can it be resolved?

I tried the following lines:

textFile = sc.textFile("README.md")
textFile.count()

Below is the output I get instead of the expected result, 126:

Traceback (most recent call last): 
    File "<stdin>", line 1, in <module> 
    File "/home/ashish/spark-1.6.1-bin-hadoop2.6/python/pyspark/rdd.py", line 1004, in count 
    return self.mapPartitions(lambda i: [sum(1 for _ in i)]).sum() 
    File "/home/ashish/spark-1.6.1-bin-hadoop2.6/python/pyspark/rdd.py", line 995, in sum 
    return self.mapPartitions(lambda x: [sum(x)]).fold(0, operator.add) 
    File "/home/ashish/spark-1.6.1-bin-hadoop2.6/python/pyspark/rdd.py", line 869, in fold 
    vals = self.mapPartitions(func).collect() 
    File "/home/ashish/spark-1.6.1-bin-hadoop2.6/python/pyspark/rdd.py", line 771, in collect 
    port = self.ctx._jvm.PythonRDD.collectAndServe(self._jrdd.rdd()) 
    File "/home/ashish/spark-1.6.1-bin-hadoop2.6/python/lib/py4j-0.9-src.zip/py4j/java_gateway.py", line 813, in __call__ 
    File "/home/ashish/spark-1.6.1-bin-hadoop2.6/python/pyspark/sql/utils.py", line 45, in deco 
    return f(*a, **kw) 
    File "/home/ashish/spark-1.6.1-bin-hadoop2.6/python/lib/py4j-0.9-src.zip/py4j/protocol.py", line 308, in get_return_value 
py4j.protocol.Py4JJavaError: An error occurred while calling z:org.apache.spark.api.python.PythonRDD.collectAndServe. 
: java.net.ConnectException: Call From gangwar/127.0.1.1 to localhost:54310 failed on connection exception: java.net.ConnectException: Connection refused; For more details see: http://wiki.apache.org/hadoop/ConnectionRefused 
    at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) 
    at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) 
    at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) 
    at java.lang.reflect.Constructor.newInstance(Constructor.java:423) 
    at org.apache.hadoop.net.NetUtils.wrapWithMessage(NetUtils.java:791) 
    at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:731) 
    at org.apache.hadoop.ipc.Client.call(Client.java:1472) 
    at org.apache.hadoop.ipc.Client.call(Client.java:1399) 
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:232) 
    at com.sun.proxy.$Proxy20.getFileInfo(Unknown Source) 
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getFileInfo(ClientNamenodeProtocolTranslatorPB.java:752) 
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) 
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) 
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) 
    at java.lang.reflect.Method.invoke(Method.java:498) 
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:187) 
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102) 
    at com.sun.proxy.$Proxy21.getFileInfo(Unknown Source) 
    at org.apache.hadoop.hdfs.DFSClient.getFileInfo(DFSClient.java:1988) 
    at org.apache.hadoop.hdfs.DistributedFileSystem$18.doCall(DistributedFileSystem.java:1118) 
    at org.apache.hadoop.hdfs.DistributedFileSystem$18.doCall(DistributedFileSystem.java:1114) 
    at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81) 
    at org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:1114) 
    at org.apache.hadoop.fs.Globber.getFileStatus(Globber.java:57) 
    at org.apache.hadoop.fs.Globber.glob(Globber.java:252) 
    at org.apache.hadoop.fs.FileSystem.globStatus(FileSystem.java:1644) 
    at org.apache.hadoop.mapred.FileInputFormat.singleThreadedListStatus(FileInputFormat.java:257) 
    at org.apache.hadoop.mapred.FileInputFormat.listStatus(FileInputFormat.java:228) 
    at org.apache.hadoop.mapred.FileInputFormat.getSplits(FileInputFormat.java:313) 
    at org.apache.spark.rdd.HadoopRDD.getPartitions(HadoopRDD.scala:199) 
    at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:239) 
    at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:237) 
    at scala.Option.getOrElse(Option.scala:120) 
    at org.apache.spark.rdd.RDD.partitions(RDD.scala:237) 
    at org.apache.spark.rdd.MapPartitionsRDD.getPartitions(MapPartitionsRDD.scala:35) 
    at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:239) 
    at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:237) 
    at scala.Option.getOrElse(Option.scala:120) 
    at org.apache.spark.rdd.RDD.partitions(RDD.scala:237) 
    at org.apache.spark.api.python.PythonRDD.getPartitions(PythonRDD.scala:58) 
    at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:239) 
    at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:237) 
    at scala.Option.getOrElse(Option.scala:120) 
    at org.apache.spark.rdd.RDD.partitions(RDD.scala:237) 
    at org.apache.spark.SparkContext.runJob(SparkContext.scala:1929) 
    at org.apache.spark.rdd.RDD$$anonfun$collect$1.apply(RDD.scala:927) 
    at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:150) 
    at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:111) 
    at org.apache.spark.rdd.RDD.withScope(RDD.scala:316) 
    at org.apache.spark.rdd.RDD.collect(RDD.scala:926) 
    at org.apache.spark.api.python.PythonRDD$.collectAndServe(PythonRDD.scala:405) 
    at org.apache.spark.api.python.PythonRDD.collectAndServe(PythonRDD.scala) 
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) 
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) 
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) 
    at java.lang.reflect.Method.invoke(Method.java:498) 
    at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:231) 
    at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:381) 
    at py4j.Gateway.invoke(Gateway.java:259) 
    at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:133) 
    at py4j.commands.CallCommand.execute(CallCommand.java:79) 
    at py4j.GatewayConnection.run(GatewayConnection.java:209) 
    at java.lang.Thread.run(Thread.java:745) 
Caused by: java.net.ConnectException: Connection refused 
    at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) 
    at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:717) 
    at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206) 
    at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:530) 
    at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:494) 
    at org.apache.hadoop.ipc.Client$Connection.setupConnection(Client.java:607) 
    at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:705) 
    at org.apache.hadoop.ipc.Client$Connection.access$2800(Client.java:368) 
    at org.apache.hadoop.ipc.Client.getConnection(Client.java:1521) 
    at org.apache.hadoop.ipc.Client.call(Client.java:1438) 
    ... 56 more 

Answer


You should actually read the error itself:

: java.net.ConnectException: Call From gangwar/127.0.1.1 to localhost:54310 failed on connection exception: java.net.ConnectException: Connection refused; For more details see: http://wiki.apache.org/hadoop/ConnectionRefused 

http://wiki.apache.org/hadoop/ConnectionRefused
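
For context: port 54310 is a conventional Hadoop NameNode RPC port, so this setup evidently has fs.defaultFS pointing at hdfs://localhost:54310, and Spark resolves the schemeless path "README.md" against that HDFS cluster, which is not running. Two common ways out are to start HDFS (e.g. with Hadoop's start-dfs.sh) or to bypass HDFS with an explicit file:// URI. A minimal sketch of the second option, assuming README.md sits in the Spark home directory shown in the traceback (the exact path is an assumption):

# Hypothetical illustration: an explicit file:// URI makes Spark read from
# the local filesystem instead of resolving the path against fs.defaultFS
# (hdfs://localhost:54310 here, where no NameNode is listening).
textFile = sc.textFile("file:///home/ashish/spark-1.6.1-bin-hadoop2.6/README.md")
print(textFile.count())  # should now print the line count (126 in the tutorial)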

I know the error (connection refused), but I do not know how to solve it. I am new to both Ubuntu and Spark. I googled it but could not find a solution. It would be a great help if someone could provide a solution. –

It is literally impossible for anyone to tell you with certainty what the problem is. Only you have access to your network, so only you can debug what is wrong. It could be one of a hundred problems; it is the equivalent of an error saying "I can't connect to the internet". As explained here: http://wiki.apache.org/hadoop/YourNetworkYourProblem What were the results of the 9 steps on the page I linked in the original answer? – Chris
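
As a first step along those lines, one can at least confirm from a Python shell that nothing is listening on the NameNode address named in the error (a minimal sketch; host and port are taken directly from the error message):

import socket

# Probe localhost:54310, the NameNode address from the error message.
# "Connection refused" here confirms that no HDFS NameNode is listening,
# which means HDFS needs to be started (or bypassed with file:// paths).
sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.settimeout(2)
try:
    sock.connect(("localhost", 54310))
    print("something is listening on port 54310")
except socket.error as exc:
    print("connection failed:", exc)
finally:
    sock.close()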
