Spark EC2: Connection refused

I get a 'Connection refused' exception when running my Spark code. I am on an Amazon AWS EC2 instance running Ubuntu 14.04 LTS, and Hadoop is configured to use port 8020.

Port 8020 is bound to localhost (127.0.0.1). I have been told this is the problem.

How should /etc/hosts be configured?

[email protected]:~$ netstat -atnp | grep 8020 
(Not all processes could be identified, non-owned process info 
will not be shown, you would have to be root to see it all.) 
tcp  0  0 127.0.0.1:8020   0.0.0.0:*    LISTEN  -    
tcp  0  0 127.0.0.1:38730   127.0.0.1:8020   TIME_WAIT -    
tcp  0  0 127.0.0.1:38683   127.0.0.1:8020   ESTABLISHED -    
tcp  0  0 127.0.0.1:8020   127.0.0.1:38683   ESTABLISHED -    
tcp  0  0 127.0.0.1:38732   127.0.0.1:8020   ESTABLISHED -    
tcp  0  0 127.0.0.1:8020   127.0.0.1:38732   ESTABLISHED -    
[email protected]:~$ cat /etc/hosts 
127.0.0.1 localhost 
10.0.1.215 ec2-52-8-16-250.us-west-1.compute.amazonaws.com ec2-52-8-16-250 

# The following lines are desirable for IPv6 capable hosts 
::1 ip6-localhost ip6-loopback 
fe00::0 ip6-localnet 
ff00::0 ip6-mcastprefix 
ff02::1 ip6-allnodes 
ff02::2 ip6-allrouters 
ff02::3 ip6-allhosts 
[email protected]:~$ 

core-site.xml:

<property> 
    <name>hadoop.tmp.dir</name> 
    <value>/usr/local/hadoop/tmp</value> 
</property> 
<property> 
    <name>fs.default.name</name> 
    <value>hdfs://localhost:8020</value> 
</property> 

hdfs-site.xml:

<property> 
     <name>dfs.replication</name> 
     <value>1</value> 
</property> 
<property> 
     <name>dfs.namenode.name.dir</name> 
     <value>file:/bigdata/hadoop_tmp/hdfs/namenode</value> 
</property> 
<property> 
     <name>dfs.datanode.data.dir</name> 
     <value>file:/bigdata/hadoop_tmp/hdfs/datanode</value> 
</property> 
<property> 
     <name>dfs.namenode.secondary.http-address</name> 
     <value>localhost:50090</value> 
</property> 


$ hadoop version 
Hadoop 2.6.4 
Subversion https://git-wip-us.apache.org/repos/asf/hadoop.git -r 5082c73637530b0b7e115f9625ed7fac69f937e6 
Compiled by jenkins on 2016-02-12T09:45Z 
Compiled with protoc 2.5.0 
From source with checksum 8dee2286ecdbbbc930a6c87b65cbc010 
This command was run using /usr/local/hadoop/share/hadoop/common/hadoop-common-2.6.4.jar 
[email protected]:/home/ubuntu$ 

[email protected]:/home/ubuntu$ env | grep HADOOP 
HADOOP_HOME=/usr/local/hadoop 
HADOOP_COMMON_LIB_NATIVE_DIR=/usr/local/hadoop/lib/native 
HADOOP_HDFS_HOME=/usr/local/hadoop 
HADOOP_COMMON_HOME=/usr/local/hadoop 
HADOOP_OPTS=-Djava.library.path=/usr/local/hadoop/lib/native 
HADOOP_MAPRED_HOME=/usr/local/hadoop 

Hadoop processes:

[email protected]:/home/ubuntu$ jps 
22082 DataNode 
22306 SecondaryNameNode 
22519 NodeManager 
22471 ResourceManager 
21902 NameNode 
23198 Jps 
22959 JobHistoryServer 

For example:

[email protected]:/home/ubuntu$ hadoop fs -put /hdfs/wikidump.xml /user/hduser/wikidump.xml 
[email protected]:/home/ubuntu$ hadoop fs -ls /user/hduser 
Found 1 items 
-rw-r--r-- 1 hduser supergroup 52927948149 2016-07-26 19:43 /user/hduser/wikidump.xml 

[email protected]:/home/ubuntu$ nc -vz localhost 8020 
Connection to localhost 8020 port [tcp/*] succeeded! 
[email protected]:/usr/local/hadoop/logs$ nmap localhost 

Starting Nmap 6.40 (http://nmap.org) at 2016-07-27 02:37 UTC 
Nmap scan report for localhost (127.0.0.1) 
Host is up (0.00060s latency). 
Not shown: 991 closed ports 
PORT  STATE SERVICE 
21/tcp open ftp 
22/tcp open ssh 
631/tcp open ipp 
5902/tcp open vnc-2 
6002/tcp open X11:2 
8031/tcp open unknown 
8042/tcp open fs-agent 
8080/tcp open http-proxy 
8088/tcp open radan-http 

Nmap done: 1 IP address (1 host up) scanned in 0.09 seconds 
[email protected]:/usr/local/hadoop/logs$ nmap 52.8.16.250 

Starting Nmap 6.40 (http://nmap.org) at 2016-07-27 02:38 UTC 
Nmap scan report for ec2-52-8-16-250.us-west-1.compute.amazonaws.com (52.8.16.250) 
Host is up (0.00056s latency). 
Not shown: 983 filtered ports 
PORT  STATE SERVICE 
20/tcp closed ftp-data 
21/tcp open ftp 
22/tcp open ssh 
80/tcp closed http 
443/tcp closed https 
3030/tcp closed arepa-cas 
5000/tcp closed upnp 
5222/tcp closed xmpp-client 
5432/tcp closed postgresql 
5901/tcp closed vnc-1 
5902/tcp open vnc-2 
5903/tcp closed vnc-3 
5904/tcp closed unknown 
8080/tcp open http-proxy 
8081/tcp closed blackice-icecap 
8888/tcp closed sun-answerbook 
9000/tcp closed cslistener 

Nmap done: 1 IP address (1 host up) scanned in 4.46 seconds 

Nmap scan report for ip-10-0-1-215.us-west-1.compute.internal (10.0.1.215) 
Host is up (0.00051s latency). 
Not shown: 992 closed ports 
PORT  STATE SERVICE 
21/tcp open ftp 
22/tcp open ssh 
5902/tcp open vnc-2 
6002/tcp open X11:2 
8031/tcp open unknown 
8042/tcp open fs-agent 
8080/tcp open http-proxy 
8088/tcp open radan-http 

Nmap done: 1 IP address (1 host up) scanned in 0.09 seconds 
[email protected]:/usr/local/hadoop/logs$ 

$ ~/spark/bin/spark-shell -version 

log4j:WARN No appenders could be found for logger (org.apache.hadoop.metrics2.lib.MutableMetricsFactory). 
log4j:WARN Please initialize the log4j system properly. 
log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for more info. 
Using Spark's repl log4j profile: org/apache/spark/log4j-defaults-repl.properties 
To adjust logging level use sc.setLogLevel("INFO") 
Welcome to 
      ____              __ 
     / __/__  ___ _____/ /__ 
    _\ \/ _ \/ _ `/ __/ '_/ 
   /___/ .__/\_,_/_/ /_/\_\   version 1.6.2 
      /_/ 

Using Scala version 2.10.5 (Java HotSpot(TM) 64-Bit Server VM, Java 1.8.0_101) 
Type in expressions to have them evaluated. 
Type :help for more information. 
16/07/27 03:00:49 WARN Utils: Service 'SparkUI' could not bind on port 4040. Attempting port 4041. 
Spark context available as sc. 
SQL context available as sqlContext. 

Here is the full stack trace:

java.net.ConnectException: Call From ip-10-0-1-215.us-west-1.compute.internal/10.0.1.215 to 52.8.16.250:8020 failed on connection exception: java.net.ConnectException: Connection refused; For more details see: http://wiki.apache.org/hadoop/ConnectionRefused 
    at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) 
    at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) 
    at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) 
    at java.lang.reflect.Constructor.newInstance(Constructor.java:423) 
    at org.apache.hadoop.net.NetUtils.wrapWithMessage(NetUtils.java:791) 
    at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:731) 
    at org.apache.hadoop.ipc.Client.call(Client.java:1472) 
    at org.apache.hadoop.ipc.Client.call(Client.java:1399) 
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:232) 
    at com.sun.proxy.$Proxy32.getFileInfo(Unknown Source) 
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getFileInfo(ClientNamenodeProtocolTranslatorPB.java:752) 
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) 
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) 
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) 
    at java.lang.reflect.Method.invoke(Method.java:498) 
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:187) 
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102) 
    at com.sun.proxy.$Proxy33.getFileInfo(Unknown Source) 
    at org.apache.hadoop.hdfs.DFSClient.getFileInfo(DFSClient.java:1988) 
    at org.apache.hadoop.hdfs.DistributedFileSystem$18.doCall(DistributedFileSystem.java:1118) 
    at org.apache.hadoop.hdfs.DistributedFileSystem$18.doCall(DistributedFileSystem.java:1114) 
    at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81) 
    at org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:1114) 
    at org.apache.hadoop.fs.Globber.getFileStatus(Globber.java:57) 
    at org.apache.hadoop.fs.Globber.glob(Globber.java:252) 
    at org.apache.hadoop.fs.FileSystem.globStatus(FileSystem.java:1644) 
    at org.apache.hadoop.mapreduce.lib.input.FileInputFormat.singleThreadedListStatus(FileInputFormat.java:292) 
    at org.apache.hadoop.mapreduce.lib.input.FileInputFormat.listStatus(FileInputFormat.java:264) 
    at org.apache.hadoop.mapreduce.lib.input.FileInputFormat.getSplits(FileInputFormat.java:385) 
    at org.apache.spark.rdd.NewHadoopRDD.getPartitions(NewHadoopRDD.scala:120) 
    at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:239) 
    at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:237) 
    at scala.Option.getOrElse(Option.scala:120) 
    at org.apache.spark.rdd.RDD.partitions(RDD.scala:237) 
    at org.apache.spark.rdd.MapPartitionsRDD.getPartitions(MapPartitionsRDD.scala:35) 
    at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:239) 
    at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:237) 
    at scala.Option.getOrElse(Option.scala:120) 
    at org.apache.spark.rdd.RDD.partitions(RDD.scala:237) 
    at org.apache.spark.rdd.PartitionwiseSampledRDD.getPartitions(PartitionwiseSampledRDD.scala:58) 
    at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:239) 
    at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:237) 
    at scala.Option.getOrElse(Option.scala:120) 
    at org.apache.spark.rdd.RDD.partitions(RDD.scala:237) 
    at org.apache.spark.rdd.MapPartitionsRDD.getPartitions(MapPartitionsRDD.scala:35) 
    at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:239) 
    at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:237) 
    at scala.Option.getOrElse(Option.scala:120) 
    at org.apache.spark.rdd.RDD.partitions(RDD.scala:237) 
    at org.apache.spark.rdd.MapPartitionsRDD.getPartitions(MapPartitionsRDD.scala:35) 
    at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:239) 
    at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:237) 
    at scala.Option.getOrElse(Option.scala:120) 
    at org.apache.spark.rdd.RDD.partitions(RDD.scala:237) 
    at org.apache.spark.rdd.MapPartitionsRDD.getPartitions(MapPartitionsRDD.scala:35) 
    at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:239) 
    at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:237) 
    at scala.Option.getOrElse(Option.scala:120) 
    at org.apache.spark.rdd.RDD.partitions(RDD.scala:237) 
    at org.apache.spark.rdd.MapPartitionsRDD.getPartitions(MapPartitionsRDD.scala:35) 
    at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:239) 
    at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:237) 
    at scala.Option.getOrElse(Option.scala:120) 
    at org.apache.spark.rdd.RDD.partitions(RDD.scala:237) 
    at org.apache.spark.rdd.MapPartitionsRDD.getPartitions(MapPartitionsRDD.scala:35) 
    at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:239) 
    at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:237) 
    at scala.Option.getOrElse(Option.scala:120) 
    at org.apache.spark.rdd.RDD.partitions(RDD.scala:237) 
    at org.apache.spark.rdd.MapPartitionsRDD.getPartitions(MapPartitionsRDD.scala:35) 
    at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:239) 
    at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:237) 
    at scala.Option.getOrElse(Option.scala:120) 
    at org.apache.spark.rdd.RDD.partitions(RDD.scala:237) 
    at org.apache.spark.rdd.RDD$$anonfun$zipWithUniqueId$1.apply(RDD.scala:1286) 
    at org.apache.spark.rdd.RDD$$anonfun$zipWithUniqueId$1.apply(RDD.scala:1285) 
    at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:150) 
    at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:111) 
    at org.apache.spark.rdd.RDD.withScope(RDD.scala:316) 
    at org.apache.spark.rdd.RDD.zipWithUniqueId(RDD.scala:1285) 
    at com.cloudera.datascience.lsa.ParseWikipedia$.documentTermMatrix(ParseWikipedia.scala:48) 
    at $iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$RunLSA$.preprocessing(<console>:111) 
    at $iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC.<init>(<console>:71) 
    at $iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC.<init>(<console>:80) 
    at $iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC.<init>(<console>:82) 
    at $iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC.<init>(<console>:84) 
    at $iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC.<init>(<console>:86) 
    at $iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC.<init>(<console>:88) 
    at $iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC.<init>(<console>:90) 
    at $iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC.<init>(<console>:92) 
    at $iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC.<init>(<console>:94) 
    at $iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC.<init>(<console>:96) 
    at $iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC.<init>(<console>:98) 
    at $iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC.<init>(<console>:100) 
    at $iwC$$iwC$$iwC$$iwC$$iwC$$iwC.<init>(<console>:102) 
    at $iwC$$iwC$$iwC$$iwC$$iwC.<init>(<console>:104) 
    at $iwC$$iwC$$iwC$$iwC.<init>(<console>:106) 
    at $iwC$$iwC$$iwC.<init>(<console>:108) 
    at $iwC$$iwC.<init>(<console>:110) 
    at $iwC.<init>(<console>:112) 
    at <init>(<console>:114) 
    at .<init>(<console>:118) 
    at .<clinit>(<console>) 
    at .<init>(<console>:7) 
    at .<clinit>(<console>) 
    at $print(<console>) 
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) 
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) 
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) 
    at java.lang.reflect.Method.invoke(Method.java:498) 
    at org.apache.spark.repl.SparkIMain$ReadEvalPrint.call(SparkIMain.scala:1065) 
    at org.apache.spark.repl.SparkIMain$Request.loadAndRun(SparkIMain.scala:1346) 
    at org.apache.spark.repl.SparkIMain.loadAndRunReq$1(SparkIMain.scala:840) 
    at org.apache.spark.repl.SparkIMain.interpret(SparkIMain.scala:871) 
    at org.apache.spark.repl.SparkIMain.interpret(SparkIMain.scala:819) 
    at sun.reflect.GeneratedMethodAccessor47.invoke(Unknown Source) 
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) 
    at java.lang.reflect.Method.invoke(Method.java:498) 
    at org.apache.zeppelin.spark.Utils.invokeMethod(Utils.java:38) 
    at org.apache.zeppelin.spark.SparkInterpreter.interpret(SparkInterpreter.java:704) 
    at org.apache.zeppelin.spark.SparkInterpreter.interpretInput(SparkInterpreter.java:912) 
    at org.apache.zeppelin.spark.SparkInterpreter.interpret(SparkInterpreter.java:858) 
    at org.apache.zeppelin.spark.SparkInterpreter.interpret(SparkInterpreter.java:851) 
    at org.apache.zeppelin.interpreter.LazyOpenInterpreter.interpret(LazyOpenInterpreter.java:94) 
    at org.apache.zeppelin.interpreter.remote.RemoteInterpreterServer$InterpretJob.jobRun(RemoteInterpreterServer.java:383) 
    at org.apache.zeppelin.scheduler.Job.run(Job.java:176) 
    at org.apache.zeppelin.scheduler.FIFOScheduler$1.run(FIFOScheduler.java:139) 
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) 
    at java.util.concurrent.FutureTask.run(FutureTask.java:266) 
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180) 
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293) 
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) 
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) 
    at java.lang.Thread.run(Thread.java:745) 
Caused by: java.net.ConnectException: Connection refused 
    at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) 
    at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:717) 
    at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206) 
    at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:530) 
    at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:494) 
    at org.apache.hadoop.ipc.Client$Connection.setupConnection(Client.java:607) 
    at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:705) 
    at org.apache.hadoop.ipc.Client$Connection.access$2800(Client.java:368) 
    at org.apache.hadoop.ipc.Client.getConnection(Client.java:1521) 
    at org.apache.hadoop.ipc.Client.call(Client.java:1438) 
    ... 127 more 
ERROR 

The Zeppelin paragraph that triggers the error: 

%spark 
val numTerms = 50000 
val sampleSize = 0.1 
val (termDocMatrix, termIds, docIds, idfs) = RunLSA.preprocessing(sampleSize, numTerms, sc) 
(followed by the same stack trace as shown above) 

Why do you have services that listen only on localhost? That is why you should not –


So to connect, fs.default.name hdfs://localhost:8020 should be: fs.default.name hdfs://0.0.0.0:8020 – dbl001


Given that this property is deprecated... No. You can see the defaults and explanations here: https://hadoop.apache.org/docs/r2.7.2/hadoop-project-dist/hadoop-common/core-default.xml –

Answer


Editing /usr/local/hadoop/etc/hadoop/hdfs-site.xml to use 0.0.0.0 instead of localhost solved the problem: 

<property> 
    <name>fs.default.name</name> 
    <value>hdfs://0.0.0.0:8020</value> 
</property> 
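
After restarting HDFS, a quick sanity check from the Spark shell (or a Zeppelin %spark paragraph) is to force the same getFileInfo RPC that failed above, but through the instance's hostname rather than localhost. This is only a minimal sketch; the hostname and the wikidump.xml path are taken from the question, so substitute your own values: 

%spark 
// Sketch: read the test file via the externally resolvable NameNode address. 
// Hostname and path are the ones from the question above; adjust as needed. 
val path = "hdfs://ec2-52-8-16-250.us-west-1.compute.amazonaws.com:8020/user/hduser/wikidump.xml" 
val dump = sc.textFile(path) 
// Computing the partitions triggers the NameNode RPC; with the NameNode still 
// bound only to 127.0.0.1 this is where the ConnectException was thrown. 
println(dump.partitions.length) 

If this prints a partition count instead of the Connection refused trace, the new binding is in effect.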
