Problem defining output tensor #17
In object detection we need at least three tensors: (i) the input, (ii) the class predictions, and (iii) the box predictions, so we need to know their names in order to fetch them. How do you find those names? It depends on who built the model (whoever builds it assigns the tensor names). This repo uses the model from here; I found the output tensor names by looking at how that repo calls the frozen model to run prediction.
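As a sketch of that approach: once the frozen `*.pb` is parsed, every node name can be listed and the input/output tensors picked out by eye. This assumes TF1-style frozen graphs (`tf.compat.v1` keeps it runnable on 2.x); the path in the usage comment is a placeholder, not a file from this repo.

```python
import tensorflow as tf

def load_graph_def(pb_path):
    """Parse a frozen *.pb file into a GraphDef."""
    graph_def = tf.compat.v1.GraphDef()
    with tf.io.gfile.GFile(pb_path, "rb") as f:
        graph_def.ParseFromString(f.read())
    return graph_def

def op_names(graph_def):
    """Name of every node in the graph; scan this list for the
    input / class-prediction / box-prediction tensors."""
    return [node.name for node in graph_def.node]

# Usage (hypothetical path):
# for name in op_names(load_graph_def("./model/frozen_model.pb")):
#     print(name)
```

A tensor name is then the node name plus an output index, e.g. node `concat_9` gives tensor `concat_9:0`.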
Hi there. Thanks for the quick response. Unfortunately, I still don't see where to get those names from. Here is my .cfg file:

```
[net]
learning_rate=0.001

[convolutional]

# Downsample
[convolutional]
[convolutional]
[convolutional]
[shortcut]

# Downsample
[convolutional]
[convolutional]
[convolutional]
[shortcut]
[convolutional]
[convolutional]
[shortcut]

# Downsample
[convolutional]
[convolutional]
[convolutional]
[shortcut]
[convolutional]
[convolutional]
[shortcut]
[convolutional]
[convolutional]
[shortcut]
[convolutional]
[convolutional]
[shortcut]
[convolutional]
[convolutional]
[shortcut]
[convolutional]
[convolutional]
[shortcut]
[convolutional]
[convolutional]
[shortcut]
[convolutional]
[convolutional]
[shortcut]

# Downsample
[convolutional]
[convolutional]
[convolutional]
[shortcut]
[convolutional]
[convolutional]
[shortcut]
[convolutional]
[convolutional]
[shortcut]
[convolutional]
[convolutional]
[shortcut]
[convolutional]
[convolutional]
[shortcut]
[convolutional]
[convolutional]
[shortcut]
[convolutional]
[convolutional]
[shortcut]
[convolutional]
[convolutional]
[shortcut]

# Downsample
[convolutional]
[convolutional]
[convolutional]
[shortcut]
[convolutional]
[convolutional]
[shortcut]
[convolutional]
[convolutional]
[shortcut]
[convolutional]
[convolutional]
[shortcut]

######################

[convolutional]
[convolutional]
[convolutional]
[convolutional]
[convolutional]
[convolutional]
[convolutional]
[yolo]

[route]
[convolutional]
[upsample]
[route]

[convolutional]
[convolutional]
[convolutional]
[convolutional]
[convolutional]
[convolutional]
[convolutional]
[yolo]

[route]
[convolutional]
[upsample]
[route]

[convolutional]
[convolutional]
[convolutional]
[convolutional]
[convolutional]
[convolutional]
[convolutional]
[yolo]
```

Thanks.
You can, e.g., take a look at the code that calls the frozen model (`*.pb` file); the output tensor names needed for the call should appear there. Or you can ask the owner of the YOLO model repo you use directly.
```python
import tensorflow as tf

tf.train.import_meta_graph("./model.ckpt-200000.meta")
with tf.Session() as sess:
    # list the graph's operations to find the output name
    for op in tf.get_default_graph().get_operations():
        print(op.name)
```

then freeze the graph after getting the output name.
@Diya2507 From TensorBoard, these are the details of the whole model/graph. Which one is the output_names?
@DanielJean007 If you are using YOLOv3, then the output nodes are: output_node_names = ["input/input_data", "pred_sbbox/concat_2", "pred_mbbox/concat_2", "pred_lbbox/concat_2"] See the reference.
@ardianumam As you said, you are using this repo for the .pb, and the output_names can be found from its prediction script. Yours are: your_outputs = ["Placeholder:0", "concat_9:0", "mul_9:0"] Which one is right?
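One way to settle which of those names is valid for a given graph is simply to probe each one; a minimal sketch (this `tensor_exists` wrapper is hypothetical, not part of either repo):

```python
import tensorflow as tf

def tensor_exists(graph, name):
    """Return True if a tensor name like "concat_9:0" resolves in `graph`."""
    try:
        graph.get_tensor_by_name(name)
        return True
    except (KeyError, ValueError):
        # KeyError: no such tensor; ValueError: malformed name
        return False

# Usage: probe each candidate against your loaded graph
# for name in ["Placeholder:0", "concat_9:0", "mul_9:0"]:
#     print(name, tensor_exists(graph, name))
```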
I've converted my darknet model using https://github.com/jinyu121/DW2TF, which gives me the following files:
I then call your script with:
```python
with tf.Session(config=tf.ConfigProto(
        gpu_options=tf.GPUOptions(per_process_gpu_memory_fraction=0.50))) as sess:
    saver = tf.train.import_meta_graph(
        "/home/nvidia/Downloads/DW2TF/data/yolov3-customv1.ckpt.meta")
    saver.restore(sess, "yolov3-customv1.ckpt")
    your_outputs = ["output_tensor/random"]
    your_outputs = ["output_tensor/Softmax"]
    frozen_graph = tf.graph_util.convert_variables_to_constants(
        sess,                                    # session
        tf.get_default_graph().as_graph_def(),   # graph + weights from the session
        output_node_names=your_outputs)
    with gfile.FastGFile("./model/frozen_model.pb", 'wb') as f:
        f.write(frozen_graph.SerializeToString())
    print("Frozen model is successfully stored!")
```
Then, I receive the following error:
AssertionError: output_tensor/Softmax is not in graph
So I'm not sure what I need to change.
Could anyone help me, please?
My Yolov3.cfg can be found at: https://github.com/pjreddie/darknet/blob/master/cfg/yolov3.cfg
Thanks.
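The AssertionError above just means no node named `output_tensor/Softmax` exists in this particular graph, so that name cannot be passed as an output. One way to narrow down plausible output nodes is to list the nodes that no other node consumes; a minimal sketch in plain Python over the nodes of an already-parsed GraphDef (the helper name is mine, not from either repo):

```python
def candidate_outputs(graph_def):
    """Names of nodes that no other node consumes - likely graph outputs."""
    consumed = set()
    for node in graph_def.node:
        for inp in node.input:
            # strip control-dependency '^' markers and ':N' output indices
            consumed.add(inp.lstrip("^").split(":")[0])
    return [node.name for node in graph_def.node if node.name not in consumed]
```

Running this over the DW2TF graph should surface real candidates (for a YOLOv3 graph, typically the three detection-head nodes) instead of guessing names like `output_tensor/Softmax` from an unrelated tutorial.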