Hadoop does not recognize the input path. I have a directory named simple.input on my VM, and I am trying to run my MapReduce mapper and reducer, which read from simple.input, with the following command:
hadoop jar hadoop/share/hadoop/tools/lib/hadoop-streaming-2.7.3.jar -mapper ./fof.mapper.py -reducer fof.reducer.py -input simple.input/ -output simple.output
This is the output:
16/11/11 00:03:49 INFO Configuration.deprecation: session.id is deprecated. Instead, use dfs.metrics.session-id
16/11/11 00:03:49 INFO jvm.JvmMetrics: Initializing JVM Metrics with processName=JobTracker, sessionId=
16/11/11 00:03:49 INFO jvm.JvmMetrics: Cannot initialize JVM Metrics with processName=JobTracker, sessionId= - already initialized
16/11/11 00:03:49 INFO mapreduce.JobSubmitter: Cleaning up the staging area file:/tmp/hadoop-parallel/mapred/staging/parallel2080528058/.staging/job_local2080528058_0001
16/11/11 00:03:49 ERROR streaming.StreamJob: Error Launching job : Input path does not exist: hdfs://localhost:9000/user/parallel/simple.input
Streaming Command Failed!
I have already copied simple.input to HDFS:
[email protected]:~$ hadoop fs -copyFromLocal simple.input/
copyFromLocal: `/simple.input/100': File exists
copyFromLocal: `/simple.input/200': File exists
copyFromLocal: `/simple.input/300': File exists
copyFromLocal: `/simple.input/400': File exists
[email protected]:~$
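For context on the mismatch between the two paths: a relative `-input` path is resolved against the submitting user's HDFS home directory (here `hdfs://localhost:9000/user/parallel`, per the error log), so `simple.input` and `/simple.input` name different HDFS locations. A minimal sketch of that resolution rule, assuming the filesystem URI and home directory shown in the log above:

```shell
# Sketch: how an HDFS path resolves, depending on whether it is absolute.
# Values are taken from the error log above; adjust for your cluster.
fs_default="hdfs://localhost:9000"
hdfs_home="/user/parallel"

resolve() {
  # Absolute paths are used as-is; relative paths resolve under the HDFS home.
  case "$1" in
    /*) echo "${fs_default}$1" ;;
    *)  echo "${fs_default}${hdfs_home}/$1" ;;
  esac
}

resolve simple.input    # -> hdfs://localhost:9000/user/parallel/simple.input
resolve /simple.input   # -> hdfs://localhost:9000/simple.input
```

So if the copyFromLocal above placed the files under `/simple.input`, the streaming job would still look under `/user/parallel/simple.input` and fail with the "Input path does not exist" error shown.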