python - Edit TensorFlow InceptionV3 retraining-example.py for multiple classifications -
tl;dr: I cannot figure out how to use my retrained InceptionV3 model to make predictions on multiple images.

Hello kind people :) I've spent a few days searching many Stack Overflow posts and the documentation, but could not find an answer to this question. I would much appreciate any help on this!

I have retrained a TensorFlow InceptionV3 model on new pictures, and I am able to run it on new images by following the instructions at https://www.tensorflow.org/versions/r0.9/how_tos/image_retraining/index.html and using the following commands:
bazel build tensorflow/examples/label_image:label_image && \
bazel-bin/tensorflow/examples/label_image/label_image \
    --graph=/tmp/output_graph.pb --labels=/tmp/output_labels.txt \
    --output_layer=final_result \
    --image=image_directory_to_classify
However, I need to classify multiple images (like a dataset), and I am stuck on how to do so. I've found the following example at
https://github.com/eldor4do/tensorflow-examples/blob/master/retraining-example.py
on how to use the retrained model, but again, it is sparse on details on how to modify it for multiple classifications.
From what I've gathered from the MNIST tutorial, I need to pass a feed_dict into the sess.run() call, but I am stuck there because I couldn't understand how to implement it in this context.
Any assistance would be extremely appreciated! :)
Edit:
Running Styrke's script with some modifications, I got this:
waffle@waffleserver:~/git$ python tensorflowmasspred.py
tensorflow/stream_executor/dso_loader.cc:108] opened CUDA library libcublas.so locally
tensorflow/stream_executor/dso_loader.cc:108] opened CUDA library libcudnn.so locally
tensorflow/stream_executor/dso_loader.cc:108] opened CUDA library libcufft.so locally
tensorflow/stream_executor/dso_loader.cc:108] opened CUDA library libcuda.so locally
tensorflow/stream_executor/dso_loader.cc:108] opened CUDA library libcurand.so locally
/home/waffle/anaconda3/lib/python3.5/site-packages/tensorflow/python/ops/array_ops.py:1197: VisibleDeprecationWarning: converting an array with ndim > 0 to an index will result in an error in the future
  result_shape.insert(dim, 1)
tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:924] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
tensorflow/core/common_runtime/gpu/gpu_init.cc:102] Found device 0 with properties:
name: GeForce GTX 660
major: 3 minor: 0
memoryClockRate (GHz) 1.0975
pciBusID 0000:01:00.0
Total memory: 2.00GiB
Free memory: 1.78GiB
tensorflow/core/common_runtime/gpu/gpu_init.cc:126] DMA: 0
tensorflow/core/common_runtime/gpu/gpu_init.cc:136] 0:   Y
tensorflow/core/common_runtime/gpu/gpu_device.cc:806] Creating TensorFlow device (/gpu:0) -> (device: 0, name: GeForce GTX 660, pci bus id: 0000:01:00.0)
W tensorflow/core/framework/op_def_util.cc:332] Op BatchNormWithGlobalNormalization is deprecated. It will cease to work in GraphDef version 9. Use tf.nn.batch_normalization().
E tensorflow/core/common_runtime/executor.cc:334] Executor failed to create kernel.
Invalid argument: NodeDef mentions attr 'T' not in Op<name=MaxPool; signature=input:float -> output:float; attr=ksize:list(int),min=4; attr=strides:list(int),min=4; attr=padding:string,allowed=["SAME", "VALID"]; attr=data_format:string,default="NHWC",allowed=["NHWC", "NCHW"]>; NodeDef: pool = MaxPool[T=DT_FLOAT, data_format="NHWC", ksize=[1, 3, 3, 1], padding="VALID", strides=[1, 2, 2, 1], _device="/job:localhost/replica:0/task:0/gpu:0"](pool/control_dependency)
	 [[Node: pool = MaxPool[T=DT_FLOAT, data_format="NHWC", ksize=[1, 3, 3, 1], padding="VALID", strides=[1, 2, 2, 1], _device="/job:localhost/replica:0/task:0/gpu:0"](pool/control_dependency)]]
Traceback (most recent call last):
  File "/home/waffle/anaconda3/lib/python3.5/site-packages/tensorflow/python/client/session.py", line 715, in _do_call
    return fn(*args)
  File "/home/waffle/anaconda3/lib/python3.5/site-packages/tensorflow/python/client/session.py", line 697, in _run_fn
    status, run_metadata)
  File "/home/waffle/anaconda3/lib/python3.5/contextlib.py", line 66, in __exit__
    next(self.gen)
  File "/home/waffle/anaconda3/lib/python3.5/site-packages/tensorflow/python/framework/errors.py", line 450, in raise_exception_on_not_ok_status
    pywrap_tensorflow.TF_GetCode(status))
tensorflow.python.framework.errors.InvalidArgumentError: NodeDef mentions attr 'T' not in Op<name=MaxPool; signature=input:float -> output:float; attr=ksize:list(int),min=4; attr=strides:list(int),min=4; attr=padding:string,allowed=["SAME", "VALID"]; attr=data_format:string,default="NHWC",allowed=["NHWC", "NCHW"]>; NodeDef: pool = MaxPool[T=DT_FLOAT, data_format="NHWC", ksize=[1, 3, 3, 1], padding="VALID", strides=[1, 2, 2, 1], _device="/job:localhost/replica:0/task:0/gpu:0"](pool/control_dependency)
	 [[Node: pool = MaxPool[T=DT_FLOAT, data_format="NHWC", ksize=[1, 3, 3, 1], padding="VALID", strides=[1, 2, 2, 1], _device="/job:localhost/replica:0/task:0/gpu:0"](pool/control_dependency)]]

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "tensorflowmasspred.py", line 116, in <module>
    run_inference_on_image()
  File "tensorflowmasspred.py", line 98, in run_inference_on_image
    {'DecodeJpeg/contents:0': image_data})
  File "/home/waffle/anaconda3/lib/python3.5/site-packages/tensorflow/python/client/session.py", line 372, in run
    run_metadata_ptr)
  File "/home/waffle/anaconda3/lib/python3.5/site-packages/tensorflow/python/client/session.py", line 636, in _run
    feed_dict_string, options, run_metadata)
  File "/home/waffle/anaconda3/lib/python3.5/site-packages/tensorflow/python/client/session.py", line 708, in _do_run
    target_list, options, run_metadata)
  File "/home/waffle/anaconda3/lib/python3.5/site-packages/tensorflow/python/client/session.py", line 728, in _do_call
    raise type(e)(node_def, op, message)
tensorflow.python.framework.errors.InvalidArgumentError: NodeDef mentions attr 'T' not in Op<name=MaxPool; signature=input:float -> output:float; attr=ksize:list(int),min=4; attr=strides:list(int),min=4; attr=padding:string,allowed=["SAME", "VALID"]; attr=data_format:string,default="NHWC",allowed=["NHWC", "NCHW"]>; NodeDef: pool = MaxPool[T=DT_FLOAT, data_format="NHWC", ksize=[1, 3, 3, 1], padding="VALID", strides=[1, 2, 2, 1], _device="/job:localhost/replica:0/task:0/gpu:0"](pool/control_dependency)
	 [[Node: pool = MaxPool[T=DT_FLOAT, data_format="NHWC", ksize=[1, 3, 3, 1], padding="VALID", strides=[1, 2, 2, 1], _device="/job:localhost/replica:0/task:0/gpu:0"](pool/control_dependency)]]
Caused by op 'pool', defined at:
  File "tensorflowmasspred.py", line 116, in <module>
    run_inference_on_image()
  File "tensorflowmasspred.py", line 87, in run_inference_on_image
    create_graph()
  File "tensorflowmasspred.py", line 68, in create_graph
    _ = tf.import_graph_def(graph_def, name='')
  File "/home/waffle/anaconda3/lib/python3.5/site-packages/tensorflow/python/framework/importer.py", line 274, in import_graph_def
    op_def=op_def)
  File "/home/waffle/anaconda3/lib/python3.5/site-packages/tensorflow/python/framework/ops.py", line 2260, in create_op
    original_op=self._default_original_op, op_def=op_def)
  File "/home/waffle/anaconda3/lib/python3.5/site-packages/tensorflow/python/framework/ops.py", line 1230, in __init__
    self._traceback = _extract_stack()
This is the script, with some functions removed:
import os
import numpy as np
import tensorflow as tf
os.chdir('tensorflow/')  # if you need to run this in the tensorflow directory
import csv, os
import pandas as pd
import glob

imagepath = '../_images_processed/test'
modelfullpath = '/tmp/output_graph.pb'
labelsfullpath = '/tmp/output_labels.txt'
# File name to save to.
save_to_csv = 'tensorflowpred.csv'


def makecsv():
    global save_to_csv
    with open(save_to_csv, 'w') as f:
        writer = csv.writer(f)
        writer.writerow(['id', 'label'])


def makeuniquedic():
    global save_to_csv
    df = pd.read_csv(save_to_csv)
    doneid = df['id']
    unique = doneid.unique()
    uniquedic = {str(key): '' for key in unique}  # for faster lookup
    return uniquedic


def create_graph():
    """Creates a graph from a saved GraphDef file and returns a saver."""
    # Creates graph from saved graph_def.pb.
    with tf.gfile.FastGFile(modelfullpath, 'rb') as f:
        graph_def = tf.GraphDef()
        graph_def.ParseFromString(f.read())
        _ = tf.import_graph_def(graph_def, name='')


def run_inference_on_image():
    answer = []
    global imagepath
    if not tf.gfile.IsDirectory(imagepath):
        tf.logging.fatal('imagepath directory does not exist %s', imagepath)
        return answer
    if not os.path.exists(save_to_csv):
        makecsv()
    files = glob.glob(imagepath + '/*.jpg')
    uniquedic = makeuniquedic()
    # Get a list of the files in the imagepath directory
    # image_list = tf.gfile.ListDirectory(imagepath)
    # Creates graph from saved GraphDef.
    create_graph()
    with tf.Session() as sess:
        softmax_tensor = sess.graph.get_tensor_by_name('final_result:0')
        for pic in files:
            name = getnamepicture(pic)
            if name not in uniquedic:
                image_data = tf.gfile.FastGFile(pic, 'rb').read()
                predictions = sess.run(softmax_tensor,
                                       {'DecodeJpeg/contents:0': image_data})
                predictions = np.squeeze(predictions)
                top_k = predictions.argsort()[-5:][::-1]  # Getting top 5 predictions
                f = open(labelsfullpath, 'rb')
                lines = f.readlines()
                labels = [str(w).replace("\n", "") for w in lines]
                # for node_id in top_k:
                #     human_string = labels[node_id]
                #     score = predictions[node_id]
                #     print('%s (score = %.5f)' % (human_string, score))
                pred = labels[top_k[0]]
                with open(save_to_csv, 'a') as f:
                    writer = csv.writer(f)
                    writer.writerow([name, pred])
    return answer


if __name__ == '__main__':
    run_inference_on_image()
So, looking at the linked script:
with tf.Session() as sess:
    softmax_tensor = sess.graph.get_tensor_by_name('final_result:0')
    predictions = sess.run(softmax_tensor,
                           {'DecodeJpeg/contents:0': image_data})
    predictions = np.squeeze(predictions)
    top_k = predictions.argsort()[-5:][::-1]  # Getting top 5 predictions
Within this snippet, image_data is the new image that you want to feed to the model, and it is loaded a few lines earlier:

image_data = tf.gfile.FastGFile(imagepath, 'rb').read()
So my instinct would be to change run_inference_on_image to accept imagepath as a parameter, and then use os.listdir and os.path.join to run it on each image in your dataset.
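That directory loop might look like the following minimal sketch. It only covers the path handling; `list_jpegs` is a hypothetical helper name, and each path it returns would then be read and fed to the session exactly as in the snippet above:

```python
import os

def list_jpegs(image_dir):
    # Collect full paths to the .jpg files in image_dir,
    # using os.listdir and os.path.join as suggested above.
    return [os.path.join(image_dir, fname)
            for fname in sorted(os.listdir(image_dir))
            if fname.lower().endswith('.jpg')]
```

Each returned path could then be loaded with tf.gfile.FastGFile(path, 'rb').read() and passed to sess.run as the 'DecodeJpeg/contents:0' feed, one image per call, while the graph and session are created only once outside the loop.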