Editing TensorFlow's Inception v3 retraining-example.py for multiple classifications. TL;DR: I can't figure out how to use a retrained Inception v3 model to make predictions on multiple images.
Hello nice people :) I have spent a few days combing through many Stack Overflow posts and the documentation, but I could not find an answer to this question. I would really appreciate any help here!
I have retrained a TensorFlow Inception v3 model on new images by following the instructions at https://www.tensorflow.org/versions/r0.9/how_tos/image_retraining/index.html, and it works on new images with the following commands:
bazel build tensorflow/examples/label_image:label_image && \
bazel-bin/tensorflow/examples/label_image/label_image \
--graph=/tmp/output_graph.pb --labels=/tmp/output_labels.txt \
--output_layer=final_result \
--image=IMAGE_DIRECTORY_TO_CLASSIFY
However, I need to classify multiple images (like a dataset), and I am seriously stuck on how to do this. I found the following example at
https://github.com/eldor4do/Tensorflow-Examples/blob/master/retraining-example.py
showing how to use the retrained model, but again it is very sparse on details about how to modify it for multiple classifications.
From what I learned from the MNIST tutorial, I need to pass a feed_dict into the sess.run() call, but I got stuck there because I could not understand how to implement it in this context.
Any help is greatly appreciated! :)
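For context, a feed_dict in the TF1-era API is just a Python dict mapping tensor names (or tensor handles) to concrete numpy values, with the batch size as the first axis. A minimal, framework-free sketch of that shape convention (the placeholder name 'images:0' and the 28x28 size are hypothetical, not from the Inception graph):

```python
import numpy as np

# Stack 8 single images (28x28 here, purely illustrative) into one batch;
# by convention the batch dimension is the first axis.
batch = np.stack([np.zeros((28, 28), dtype=np.float32) for _ in range(8)])

# A feed_dict maps tensor names to concrete values. With a batch-capable
# placeholder, sess.run(logits, feed_dict=feed_dict) would return one
# prediction row per image in a single call.
feed_dict = {'images:0': batch}  # hypothetical placeholder name

print(feed_dict['images:0'].shape)  # (8, 28, 28)
```

Whether the retrained Inception graph exposes such a batch-capable input tensor (rather than the single-image 'DecodeJpeg/contents:0') is exactly the open question here.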
EDIT:
Running Styrke's script with some modifications, I got this:
[email protected]:~/git$ python tensorflowMassPred.py I
tensorflow/stream_executor/dso_loader.cc:108] successfully opened
CUDA library libcublas.so locally I
tensorflow/stream_executor/dso_loader.cc:108] successfully opened
CUDA library libcudnn.so locally I
tensorflow/stream_executor/dso_loader.cc:108] successfully opened
CUDA library libcufft.so locally I
tensorflow/stream_executor/dso_loader.cc:108] successfully opened
CUDA library libcuda.so locally I
tensorflow/stream_executor/dso_loader.cc:108] successfully opened
CUDA library libcurand.so locally
/home/waffle/anaconda3/lib/python3.5/site-packages/tensorflow/python/ops/array_ops.py:1197:
VisibleDeprecationWarning: converting an array with ndim > 0 to an
index will result in an error in the future
result_shape.insert(dim, 1) I
tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:924] successful
NUMA node read from SysFS had negative value (-1), but there must be
at least one NUMA node, so returning NUMA node zero I
tensorflow/core/common_runtime/gpu/gpu_init.cc:102] Found device 0
with properties: name: GeForce GTX 660 major: 3 minor: 0
memoryClockRate (GHz) 1.0975 pciBusID 0000:01:00.0 Total memory:
2.00GiB Free memory: 1.78GiB I tensorflow/core/common_runtime/gpu/gpu_init.cc:126] DMA: 0 I
tensorflow/core/common_runtime/gpu/gpu_init.cc:136] 0: Y I
tensorflow/core/common_runtime/gpu/gpu_device.cc:806] Creating
TensorFlow device (/gpu:0) -> (device: 0, name: GeForce GTX 660, pci
bus id: 0000:01:00.0) W tensorflow/core/framework/op_def_util.cc:332]
Op BatchNormWithGlobalNormalization is deprecated. It will cease to
work in GraphDef version 9. Use tf.nn.batch_normalization(). E
tensorflow/core/common_runtime/executor.cc:334] Executor failed to
create kernel. Invalid argument: NodeDef mentions attr 'T' not in
Op<name=MaxPool; signature=input:float -> output:float;
attr=ksize:list(int),min=4; attr=strides:list(int),min=4;
attr=padding:string,allowed=["SAME", "VALID"];
attr=data_format:string,default="NHWC",allowed=["NHWC", "NCHW"]>;
NodeDef: pool = MaxPool[T=DT_FLOAT, data_format="NHWC", ksize=[1, 3,
3, 1], padding="VALID", strides=[1, 2, 2, 1],
_device="/job:localhost/replica:0/task:0/gpu:0"](pool/control_dependency)
[[Node: pool = MaxPool[T=DT_FLOAT, data_format="NHWC", ksize=[1, 3,
3, 1], padding="VALID", strides=[1, 2, 2, 1],
_device="/job:localhost/replica:0/task:0/gpu:0"](pool/control_dependency)]]
Traceback (most recent call last): File
"/home/waffle/anaconda3/lib/python3.5/site-packages/tensorflow/python/client/session.py",
line 715, in _do_call
return fn(*args) File "/home/waffle/anaconda3/lib/python3.5/site-packages/tensorflow/python/client/session.py",
line 697, in _run_fn
status, run_metadata) File "/home/waffle/anaconda3/lib/python3.5/contextlib.py", line 66, in
__exit__
next(self.gen) File "/home/waffle/anaconda3/lib/python3.5/site-packages/tensorflow/python/framework/errors.py",
line 450, in raise_exception_on_not_ok_status
pywrap_tensorflow.TF_GetCode(status)) tensorflow.python.framework.errors.InvalidArgumentError: NodeDef
mentions attr 'T' not in Op<name=MaxPool; signature=input:float ->
output:float; attr=ksize:list(int),min=4;
attr=strides:list(int),min=4; attr=padding:string,allowed=["SAME",
"VALID"]; attr=data_format:string,default="NHWC",allowed=["NHWC",
"NCHW"]>; NodeDef: pool = MaxPool[T=DT_FLOAT, data_format="NHWC",
ksize=[1, 3, 3, 1], padding="VALID", strides=[1, 2, 2, 1],
_device="/job:localhost/replica:0/task:0/gpu:0"](pool/control_dependency)
[[Node: pool = MaxPool[T=DT_FLOAT, data_format="NHWC", ksize=[1, 3,
3, 1], padding="VALID", strides=[1, 2, 2, 1],
_device="/job:localhost/replica:0/task:0/gpu:0"](pool/control_dependency)]]
During handling of the above exception, another exception occurred:
Traceback (most recent call last): File "tensorflowMassPred.py",
line 116, in <module>
run_inference_on_image() File "tensorflowMassPred.py", line 98, in run_inference_on_image
{'DecodeJpeg/contents:0': image_data}) File "/home/waffle/anaconda3/lib/python3.5/site-packages/tensorflow/python/client/session.py",
line 372, in run
run_metadata_ptr) File "/home/waffle/anaconda3/lib/python3.5/site-packages/tensorflow/python/client/session.py",
line 636, in _run
feed_dict_string, options, run_metadata) File "/home/waffle/anaconda3/lib/python3.5/site-packages/tensorflow/python/client/session.py",
line 708, in _do_run
target_list, options, run_metadata) File "/home/waffle/anaconda3/lib/python3.5/site-packages/tensorflow/python/client/session.py",
line 728, in _do_call
raise type(e)(node_def, op, message) tensorflow.python.framework.errors.InvalidArgumentError: NodeDef
mentions attr 'T' not in Op<name=MaxPool; signature=input:float ->
output:float; attr=ksize:list(int),min=4;
attr=strides:list(int),min=4; attr=padding:string,allowed=["SAME",
"VALID"]; attr=data_format:string,default="NHWC",allowed=["NHWC",
"NCHW"]>; NodeDef: pool = MaxPool[T=DT_FLOAT, data_format="NHWC",
ksize=[1, 3, 3, 1], padding="VALID", strides=[1, 2, 2, 1],
_device="/job:localhost/replica:0/task:0/gpu:0"](pool/control_dependency)
[[Node: pool = MaxPool[T=DT_FLOAT, data_format="NHWC", ksize=[1, 3,
3, 1], padding="VALID", strides=[1, 2, 2, 1],
_device="/job:localhost/replica:0/task:0/gpu:0"](pool/control_dependency)]]
Caused by op 'pool', defined at: File "tensorflowMassPred.py", line
116, in <module>
run_inference_on_image() File "tensorflowMassPred.py", line 87, in run_inference_on_image
create_graph() File "tensorflowMassPred.py", line 68, in create_graph
_ = tf.import_graph_def(graph_def, name='') File "/home/waffle/anaconda3/lib/python3.5/site-packages/tensorflow/python/framework/importer.py",
line 274, in import_graph_def
op_def=op_def) File "/home/waffle/anaconda3/lib/python3.5/site-packages/tensorflow/python/framework/ops.py",
line 2260, in create_op
original_op=self._default_original_op, op_def=op_def) File "/home/waffle/anaconda3/lib/python3.5/site-packages/tensorflow/python/framework/ops.py",
line 1230, in __init__
self._traceback = _extract_stack()
This is the script, with some functions removed:
import os
import csv
import glob

import numpy as np
import pandas as pd
import tensorflow as tf

os.chdir('tensorflow/')  # if this needs to run from the tensorflow directory

imagePath = '../_images_processed/test'
modelFullPath = '/tmp/output_graph.pb'
labelsFullPath = '/tmp/output_labels.txt'

# FILE NAME TO SAVE TO.
SAVE_TO_CSV = 'tensorflowPred.csv'


def makeCSV():
    with open(SAVE_TO_CSV, 'w') as f:
        writer = csv.writer(f)
        writer.writerow(['id', 'label'])


def makeUniqueDic():
    df = pd.read_csv(SAVE_TO_CSV)
    doneID = df['id']
    unique = doneID.unique()
    uniqueDic = {str(key): '' for key in unique}  # for faster lookup
    return uniqueDic


def create_graph():
    """Creates a graph from saved GraphDef file and returns a saver."""
    # Creates graph from saved graph_def.pb.
    with tf.gfile.FastGFile(modelFullPath, 'rb') as f:
        graph_def = tf.GraphDef()
        graph_def.ParseFromString(f.read())
        _ = tf.import_graph_def(graph_def, name='')


def run_inference_on_image():
    answer = []
    if not tf.gfile.IsDirectory(imagePath):
        tf.logging.fatal('imagePath directory does not exist %s', imagePath)
        return answer
    if not os.path.exists(SAVE_TO_CSV):
        makeCSV()
    files = glob.glob(imagePath + '/*.jpg')
    uniqueDic = makeUniqueDic()
    # Get a list of all files in imagePath directory
    # image_list = tf.gfile.ListDirectory(imagePath)
    # Creates graph from saved GraphDef.
    create_graph()
    # Read the labels once, in text mode ('r', not 'rb'), so the entries
    # do not end up as "b'...'" strings.
    with open(labelsFullPath, 'r') as f:
        labels = [line.strip() for line in f.readlines()]
    with tf.Session() as sess:
        softmax_tensor = sess.graph.get_tensor_by_name('final_result:0')
        for pic in files:
            name = getNamePicture(pic)  # helper function omitted from the post
            if name not in uniqueDic:
                image_data = tf.gfile.FastGFile(pic, 'rb').read()
                predictions = sess.run(softmax_tensor,
                                       {'DecodeJpeg/contents:0': image_data})
                predictions = np.squeeze(predictions)
                top_k = predictions.argsort()[-5:][::-1]  # Getting top 5 predictions
                # for node_id in top_k:
                #     human_string = labels[node_id]
                #     score = predictions[node_id]
                #     print('%s (score = %.5f)' % (human_string, score))
                pred = labels[top_k[0]]
                with open(SAVE_TO_CSV, 'a') as f:
                    writer = csv.writer(f)
                    writer.writerow([name, pred])
    return answer


if __name__ == '__main__':
    run_inference_on_image()
Hi, thanks very much for your answer! I'm sorry, I don't understand what you mean. Are you saying to load all the images and then loop sess.run()? – Wboy
That's an excellent way to do it! :D – struct
Unfortunately that's not what I'm looking for. It's comparable to fitting rows one at a time in a normal classifier (like xgboost) and is incredibly slow (it takes 31 hours for 8k images). I'm looking for a solution that feeds all X images into feed_dict and through the classifier, and outputs predictions for all images at once. – Wboy
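To illustrate the speed-up Wboy is after, here is a framework-agnostic numpy sketch (the 2048-dimensional "bottleneck" vectors and 5 classes are made-up shapes, not taken from the actual graph): applying a final layer to a stacked batch in one call gives the same results as looping over rows one at a time.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((8, 2048)).astype(np.float32)  # 8 feature vectors
W = rng.standard_normal((2048, 5)).astype(np.float32)  # final-layer weights

# Per-image loop: one forward pass per row (the slow pattern).
loop_preds = np.stack([x @ W for x in X])

# Batched: one forward pass for all rows at once.
batch_preds = X @ W

assert batch_preds.shape == (8, 5)
assert np.allclose(loop_preds, batch_preds)
```

The catch with the retrained Inception graph is that the 'DecodeJpeg/contents:0' input only accepts one encoded JPEG at a time, so batching would require feeding an internal tensor that carries a batch dimension; whether and where the graph exposes one is the unresolved part of the question.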