
Hadoop custom output RecordWriter error

I am using HVPI, an open-source Hadoop video-processing interface, to process a video with Hadoop and MapReduce (fully distributed mode). I split the video into frames and now want to assemble a new video from those frames with the Xuggler API.

The map phase works fine, but the reduce phase fails with java.lang.RuntimeException: error Operation not allowed. This happens because I try to create the new video in a local directory on the master node, and I really don't know how to do this on HDFS. This is the job log:

17/03/25 08:07:12 INFO client.RMProxy: Connecting to ResourceManager at evoido/192.168.25.11:8032 
17/03/25 08:07:13 INFO Configuration.deprecation: mapred.output.dir is deprecated. Instead, use mapreduce.output.fileoutputformat.outputdir 
17/03/25 08:07:13 WARN mapreduce.JobResourceUploader: Hadoop command-line option parsing not performed. Implement the Tool interface and execute your application with ToolRunner to remedy this. 
17/03/25 08:29:50 INFO input.FileInputFormat: Total input paths to process : 1 
17/03/25 08:29:51 INFO mapreduce.JobSubmitter: number of splits:1 
17/03/25 08:29:51 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1490439401793_0001 
17/03/25 08:29:52 INFO impl.YarnClientImpl: Submitted application application_1490439401793_0001 
17/03/25 08:29:52 INFO mapreduce.Job: The url to track the job: http://evoido:8088/proxy/application_1490439401793_0001/ 
17/03/25 08:29:52 INFO mapreduce.Job: Running job: job_1490439401793_0001 
17/03/25 08:30:28 INFO mapreduce.Job: Job job_1490439401793_0001 running in uber mode : false 
17/03/25 08:30:28 INFO mapreduce.Job: map 0% reduce 0% 
17/03/25 08:30:52 INFO mapreduce.Job: map 100% reduce 0% 
17/03/25 08:30:52 INFO mapreduce.Job: Task Id : attempt_1490439401793_0001_m_000000_0, Status : FAILED 
17/03/25 08:30:54 INFO mapreduce.Job: map 0% reduce 0% 
17/03/25 08:37:40 INFO mapreduce.Job: map 68% reduce 0% 
17/03/25 08:37:43 INFO mapreduce.Job: map 69% reduce 0% 
17/03/25 08:37:52 INFO mapreduce.Job: map 73% reduce 0% 
17/03/25 08:38:30 INFO mapreduce.Job: map 82% reduce 0% 
17/03/25 08:39:26 INFO mapreduce.Job: map 100% reduce 0% 
17/03/25 08:40:36 INFO mapreduce.Job: map 100% reduce 67% 
17/03/25 08:40:39 INFO mapreduce.Job: Task Id : attempt_1490439401793_0001_r_000000_0, Status : FAILED 
Error: java.lang.RuntimeException: error Operação não permitida, failed to write trailer to /home/idobrt/Vídeos/Result/ 
     at com.xuggle.mediatool.MediaWriter.close(MediaWriter.java:1306) 
     at ads.ifba.edu.tcc.util.MediaWriter.close(MediaWriter.java:97) 
     at edu.bupt.videodatacenter.input.VideoRecordWriter.close(VideoRecordWriter.java:61) 
     at org.apache.hadoop.mapred.ReduceTask$NewTrackingRecordWriter.close(ReduceTask.java:550) 
     at org.apache.hadoop.mapred.ReduceTask.runNewReducer(ReduceTask.java:629) 
     at org.apache.hadoop.mapred.ReduceTask.run(ReduceTask.java:389) 
     at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:164) 
     at java.security.AccessController.doPrivileged(Native Method) 
     at javax.security.auth.Subject.doAs(Subject.java:422) 
     at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657) 
     at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:158) 

This is my VideoRecordWriter implementation:

public class VideoRecordWriter extends RecordWriter<Text, ImageWritable> {

    private FileSystem fs;

    @Override
    public void close(TaskAttemptContext job) throws IOException, InterruptedException {
        Path outputPath = new Path(job.getConfiguration().get("mapred.output.dir"));
        Configuration conf = job.getConfiguration();
        fs = outputPath.getFileSystem(conf);

        // Finish the video; this is where Xuggler tries to write the trailer and fails.
        MediaWriter.initialize().close();
        //fs.copyFromLocalFile(new Path(MediaWriter.initialize().getVideoPath()), outputPath);
        fs.close();
    }

    @Override
    public void write(Text key, ImageWritable img) throws IOException, InterruptedException {
        //System.out.println("Key value: "+key.toString());

        // Feed each frame handed to the reducer into the singleton MediaWriter.
        MediaWriter.initialize().setDimentions(img.getBufferedImage());
        MediaWriter.initialize().creaVideoContainer();
        MediaWriter.initialize().create(img.getBufferedImage());
    }
}


This is the MediaWriter singleton used by the RecordWriter above (the fields are declared at the end of the class):

    public class MediaWriter {


     private MediaWriter(){ 

     } 

     public static MediaWriter initialize() throws IOException{ 

      if(instance == null){ 
       instance = new MediaWriter(); 

       /* 
       fs = FileSystem.get(new Configuration()); 
       outputStream = fs.create(new Path("hdfs://evoido:9000/video/teste.mp4")); 
       containerFormat = IContainerFormat.make(); 
       containerFormat.setOutputFormat("mpeg4", null, "video/ogg"); 

       writer.getContainer().setFormat(containerFormat); 
       writer = ToolFactory.makeWriter(XugglerIO.map(outputStream)); 
       */ 

      } 
      return instance; 
     } 

     public void setDimentions(BufferedImage img){ 

      if((WIDTH==0)&&(HEIGHT==0)){ 
      WIDTH = img.getWidth(); 
      HEIGHT = img.getHeight(); 
      } 
     } 

     public void setFileName(Text key){ 

      if(fileName==null){ 
      fileName = key.toString(); 
      VIDEO_NAME += fileName.substring(0, (fileName.lastIndexOf("_")-4))+".mp4"; 
      } 
     } 

     public void creaVideoContainer() throws IOException{ 

      if(writer ==null){ 
      writer = ToolFactory.makeWriter(VIDEO_NAME); 
       /* 
       fs = FileSystem.get(new Configuration()); 
       outputStream = fs.create(new Path("hdfs://evoido:9000/video/teste.mp4")); 
       containerFormat = IContainerFormat.make(); 
       containerFormat.setOutputFormat("mpeg4", null, "video/ogg"); 
       */ 
      writer.getContainer().setFormat(containerFormat); 

      writer.addVideoStream(0, 0, ICodec.ID.CODEC_ID_MPEG4,WIDTH,HEIGHT); 

      } 
     } 
     public void create(BufferedImage img) {
      // we still need to figure out how to set the timestamp correctly
      if(offset == 0){
       offset = calcTimeStamp();
      }

      writer.encodeVideo(0, img, timeStamp, TimeUnit.NANOSECONDS);
      timeStamp += offset;
     }


     public void close() {
      // finish the container; this is where Xuggler writes the trailer
      writer.close();
     }

     public String getVideoPath(){ 
      return VIDEO_NAME; 
     } 
     public void setTime(long interval){ 
      time+= interval; 
     } 


     public void setQtdFrame(long frameNum){ 
      qtdFrame = frameNum; 
     } 
     /*
      * Estimates the per-frame timestamp increment from the accumulated
      * duration (time) and the number of frames (qtdFrame).
      */
     public long calcTimeStamp(){

      double interval = 0.0;
      double timeLong = Math.round(time/CONST);
      double result = (time/(double)qtdFrame)*1000.0;

      if((timeLong > 3600)&&((time % qtdFrame)!=0)){
       interval = 1000.0;
       double overplus = timeLong/3600.0;
       if(overplus >=2){
        interval*=overplus;
       }
       result+=interval;
      }

      return (long)Math.round(result);
     }

     public void setFramerate(double frameR){ 
      if(frameRate == 0){ 
       frameRate = frameR; 
      } 
     } 


     private static IMediaWriter writer; 
     private static long nextFrameTime = 0; 
     private static FileSystem fs; 
     private static OutputStream outputStream; 
     private static MediaWriter instance; 
     private static IContainerFormat containerFormat; 
     private static String VIDEO_NAME = "/home/idobrt/Vídeos/Result/"; 
     private static int WIDTH =0; 
     private static int HEIGHT= 0; 
     private static String fileName = null; 
     private static long timeStamp = 0; 
     private static double time = 0; 
     private static long qtdFrame = 0; 
     private static long offset = 0; 
     private static long startTime = 0; 
     private static double frameRate = 0; 
     private static double CONST = 1000000.0; 
     private static double INTERVAL = 1000.0; 
    } 

The problem is the line writer = ToolFactory.makeWriter(VIDEO_NAME);, because VIDEO_NAME is a local directory on the NameNode. Does anyone know the right way to do this? I think the correct approach is to write the file to HDFS. If the job runs with the LocalJobRunner it works, but then I lose the parallelism.
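A cleaned-up sketch of what the commented-out code in initialize() seems to be reaching for, i.e. mapping an HDFS output stream through XugglerIO so that Xuggler never needs a local path (untested; the helper class name, the HDFS URI and the "mp4" container choice are assumptions, not taken from the question):

import java.io.IOException;
import java.io.OutputStream;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

import com.xuggle.mediatool.IMediaWriter;
import com.xuggle.mediatool.ToolFactory;
import com.xuggle.xuggler.ICodec;
import com.xuggle.xuggler.IContainerFormat;
import com.xuggle.xuggler.io.XugglerIO;

public class HdfsVideoWriterSketch {

    /** Opens an IMediaWriter that writes into HDFS through a mapped stream. */
    public static IMediaWriter openOnHdfs(Configuration conf, String hdfsUri,
                                          int width, int height) throws IOException {
        FileSystem fs = FileSystem.get(conf);
        OutputStream out = fs.create(new Path(hdfsUri)); // e.g. hdfs://evoido:9000/video/result.mp4

        // XugglerIO.map(...) registers the stream and returns a pseudo-URL
        // that ToolFactory.makeWriter() can open instead of a local file path.
        IMediaWriter writer = ToolFactory.makeWriter(XugglerIO.map(out));

        // A raw stream has no file extension to infer the container from,
        // so the format must be set explicitly before any stream is added.
        IContainerFormat fmt = IContainerFormat.make();
        fmt.setOutputFormat("mp4", null, null);
        writer.getContainer().setFormat(fmt);

        writer.addVideoStream(0, 0, ICodec.ID.CODEC_ID_MPEG4, width, height);
        return writer;
    }
}

One caveat with this route: containers such as MP4 normally need a seekable output to write their trailer, so encoding straight into a non-seekable HDFS stream can still fail at close(); a streamable container (MPEG-TS, for example) or the copy-to-HDFS workaround from the answer below may be the more practical option.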

Answer


For now I just save the file on one DataNode (the one where the reduce phase runs) and then copy that file to HDFS. It is not the best solution, but it works for now.
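For reference, a sketch of what that workaround could look like in VideoRecordWriter.close(), reusing the copyFromLocalFile call that is commented out in the question (untested; it assumes getVideoPath() returns the finished file on the reducer's local disk):

@Override
public void close(TaskAttemptContext job) throws IOException, InterruptedException {
    // 1. Let Xuggler finish the video on the local disk of the reducer node.
    MediaWriter.initialize().close();

    // 2. Copy the finished local file into the job's output directory on HDFS.
    Configuration conf = job.getConfiguration();
    Path outputDir = new Path(conf.get("mapreduce.output.fileoutputformat.outputdir"));
    FileSystem fs = outputDir.getFileSystem(conf);
    fs.copyFromLocalFile(new Path(MediaWriter.initialize().getVideoPath()), outputDir);
}

Note that the fs.close() call from the original close() is dropped here: FileSystem.get() hands out a cached, shared instance, and closing it can affect other code running in the same JVM.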
