
TensorFlow Android Demo: no output shown

I retrained an Inception v3 model to use in the TensorFlow Android demo application, but when I try it, no output is shown.

What I have done

I trained the model as described in the Inception retraining guide. After training (on just five classes) I tested the graph:

bazel build tensorflow/examples/label_image:label_image && 
bazel-bin/tensorflow/examples/label_image/label_image \ 
--output_layer=final_result \ 
--labels=/tf_files/retrained_labels.txt \ 
--image=/home/hannan/Desktop/images.jpg \ 
--graph=/tf_files/retrained_graph.pb 

and got the following output:

I tensorflow/examples/label_image/main.cc:206] shoes (3): 0.997833 
I tensorflow/examples/label_image/main.cc:206] chair (1): 0.00118802 
I tensorflow/examples/label_image/main.cc:206] door lock (2): 0.000544737 
I tensorflow/examples/label_image/main.cc:206] bench (4): 0.000354453 
I tensorflow/examples/label_image/main.cc:206] person (0): 7.93592e-05 

Then I optimized the graph for inference:

bazel build tensorflow/python/tools:optimize_for_inference 
bazel-bin/tensorflow/python/tools/optimize_for_inference \ 
--input=/tf_files/retrained_graph.pb \ 
--output=/tf_files/optimized_graph.pb \ 
--input_names=Mul \ 
--output_names=final_result 

With that done, I tested the output graph and it again works fine.

Finally, I ran strip_unused.py as follows:

python strip_unused.py \ 
--input_graph=/tf_files/optimized_graph.pb \ 
--output_graph=/tf_files/stripped_graph.pb \ 
--input_node_names="Mul" \ 
--output_node_names="final_result" \ 
--input_binary=true 

I tested this graph again and it works fine.
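
As a quick sanity check that the expected input and output nodes survived both passes, the graph can be inspected from Python (a minimal sketch, assuming the TensorFlow 1.x API and the paths used above):

import tensorflow as tf

# Load the serialized GraphDef produced by optimize_for_inference / strip_unused.
graph_def = tf.GraphDef()
with tf.gfile.GFile("/tf_files/optimized_graph.pb", "rb") as f:
    graph_def.ParseFromString(f.read())

# The Android demo looks these nodes up by name, so both must be present.
node_names = set(node.name for node in graph_def.node)
print("Mul present:", "Mul" in node_names)
print("final_result present:", "final_result" in node_names)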

Android app changes in ClassifierActivity

private static final int NUM_CLASSES = 5; 
private static final int INPUT_SIZE = 229; 
private static final int IMAGE_MEAN = 128; 
private static final float IMAGE_STD = 128; 
private static final String INPUT_NAME = "Mul:0"; 
private static final String OUTPUT_NAME = "final_result:0"; 
private static final String MODEL_FILE = "file:///android_asset/optimized_graph.pb"; 
private static final String LABEL_FILE = "file:///android_asset/retrained_labels.txt"; 

I built & ran the project.

Logcat output

D/tensorflow: CameraActivity: onCreate [email protected] 
W/ResourceType: For resource 0x0103045b, entry index(1115) is beyond type entryCount(1) 
W/ResourceType: For resource 0x01030249, entry index(585) is beyond type entryCount(1) 
W/ResourceType: For resource 0x01030249, entry index(585) is beyond type entryCount(1) 
W/ResourceType: For resource 0x01030248, entry index(584) is beyond type entryCount(1) 
W/ResourceType: For resource 0x01030247, entry index(583) is beyond type entryCount(1) 
D/PhoneWindowEx: [PWEx][generateLayout] setLGNavigationBarColor : colors=0xff000000 
I/PhoneWindow: [setLGNavigationBarColor] color=0x ff000000 
D/tensorflow: CameraActivity: onStart [email protected] 
D/tensorflow: CameraActivity: onResume [email protected] 
D/OpenGLRenderer: Use EGL_SWAP_BEHAVIOR_PRESERVED: false 
D/PhoneWindow: notifyNavigationBarColor, color=0x: ff000000, token: [email protected] 
I/OpenGLRenderer: Initialized EGL, version 1.4 
I/CameraManagerGlobal: Connecting to camera service 
I/tensorflow: CameraConnectionFragment: Adding size: 1920x1440 
I/tensorflow: CameraConnectionFragment: Adding size: 1920x1088 
I/tensorflow: CameraConnectionFragment: Adding size: 1920x1080 
I/tensorflow: CameraConnectionFragment: Adding size: 1280x720 
I/tensorflow: CameraConnectionFragment: Adding size: 960x720 
I/tensorflow: CameraConnectionFragment: Adding size: 960x540 
I/tensorflow: CameraConnectionFragment: Adding size: 800x600 
I/tensorflow: CameraConnectionFragment: Adding size: 864x480 
I/tensorflow: CameraConnectionFragment: Adding size: 800x480 
I/tensorflow: CameraConnectionFragment: Adding size: 720x480 
I/tensorflow: CameraConnectionFragment: Adding size: 640x480 
I/tensorflow: CameraConnectionFragment: Adding size: 480x368 
I/tensorflow: CameraConnectionFragment: Adding size: 480x320 
I/tensorflow: CameraConnectionFragment: Not adding size: 352x288 
I/tensorflow: CameraConnectionFragment: Not adding size: 320x240 
I/tensorflow: CameraConnectionFragment: Not adding size: 176x144 
I/tensorflow: CameraConnectionFragment: Chosen size: 480x320 
I/TensorFlowImageClassifier: Reading labels from: retrained_labels.txt 
I/TensorFlowImageClassifier: Read 5, 5 specified 
I/native: tensorflow_inference_jni.cc:97 Native TF methods loaded. 
I/TensorFlowInferenceInterface: Native methods already loaded. 
I/native: tensorflow_inference_jni.cc:85 Creating new session variables for 7e135ad551738da4 
I/native: tensorflow_inference_jni.cc:113 Loading Tensorflow. 
I/native: tensorflow_inference_jni.cc:120 Session created. 
I/native: tensorflow_inference_jni.cc:126 Acquired AssetManager. 
I/native: tensorflow_inference_jni.cc:128 Reading file to proto: file:///android_asset/optimized_graph.pb 
I/native: tensorflow_inference_jni.cc:132 GraphDef loaded from file:///android_asset/optimized_graph.pb with 515 nodes. 
I/native: stat_summarizer.cc:38 StatSummarizer found 515 nodes 
I/native: tensorflow_inference_jni.cc:139 Creating TensorFlow graph from GraphDef. 
I/native: tensorflow_inference_jni.cc:151 Initialization done in 931.7ms 
I/tensorflow: ClassifierActivity: Sensor orientation: 90, Screen orientation: 0 
I/tensorflow: ClassifierActivity: Initializing at size 480x320 
I/CameraManager: Using legacy camera HAL. 
I/tensorflow: CameraConnectionFragment: Opening camera preview: 480x320 
I/CameraDeviceState: Legacy camera service transitioning to state CONFIGURING 
I/RequestThread-0: Configure outputs: 2 surfaces configured. 
D/Camera: app passed NULL surface 
I/[MALI][Gralloc]: dlopen libsec_mem.so fail 
I/Choreographer: Skipped 89 frames! The application may be doing too much work on its main thread. 
I/Timeline: Timeline: Activity_idle id: [email protected] time:114073819 
I/CameraDeviceState: Legacy camera service transitioning to state IDLE 
I/RequestQueue: Repeating capture request set. 
W/LegacyRequestMapper: convertRequestMetadata - control.awbRegions setting is not supported, ignoring value 
W/LegacyRequestMapper: Only received metering rectangles with weight 0. 
W/LegacyRequestMapper: Only received metering rectangles with weight 0. 
E/Camera: Unknown message type -2147483648 
I/CameraDeviceState: Legacy camera service transitioning to state CAPTURING 
I/[MALI][Gralloc]: lock_ycbcr: videobuffer_status is invalid, use default value 
I/[MALI][Gralloc]: lock_ycbcr: videobuffer_status is invalid, use default value 
D/tensorflow: CameraActivity: Initializing buffer 0 at size 153600 
D/tensorflow: CameraActivity: Initializing buffer 1 at size 38400 
D/tensorflow: CameraActivity: Initializing buffer 2 at size 38400 
I/[MALI][Gralloc]: lock_ycbcr: videobuffer_status is invalid, use default value 
I/[MALI][Gralloc]: lock_ycbcr: videobuffer_status is invalid, use default value 

When I use the application to identify an object, no output is shown.

The following also shows up in the logs:

I/native: tensorflow_inference_jni.cc:228 End computing. Ran in 4639ms (4639ms avg over 1 runs) 
E/native: tensorflow_inference_jni.cc:233 Error during inference: Invalid argument: computed output size would be negative 
     [[Node: pool_3 = AvgPool[T=DT_FLOAT, data_format="NHWC", ksize=[1, 8, 8, 1], padding="VALID", strides=[1, 1, 1, 1], _device="/job:localhost/replica:0/task:0/cpu:0"](mixed_10/join)]] 
E/native: tensorflow_inference_jni.cc:170 Output [final_result] not found, aborting! 
I/[MALI][Gralloc]: lock_ycbcr: videobuffer_status is invalid, use default value 

Answer

I figured it out. There is a typo in ClassifierActivity.

private static final int INPUT_SIZE = 229; 

should be

private static final int INPUT_SIZE = 299;
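
This also explains the error in the log: Inception v3 expects a 299x299 input, and pool_3 is an 8x8 VALID average pool. With a 229x229 input, the feature map that reaches pool_3 comes out smaller than 8x8, so the VALID pooling size computation goes negative. A rough illustration (the exact intermediate sizes depend on the network definition, so the smaller value below is only indicative):

# Assumed VALID-pooling size formula: out = floor((in - ksize) / stride) + 1
def valid_pool_out(in_size, ksize=8, stride=1):
    return (in_size - ksize) // stride + 1

print(valid_pool_out(8))  # 1  -> the 8x8 map a 299x299 input produces at pool_3
print(valid_pool_out(5))  # -2 -> "computed output size would be negative"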