Common DNN functions shared by the different backends.
Definition in file dnn_backend_common.h.
Definition at line 31 of file dnn_backend_common.h.
Definition at line 39 of file dnn_backend_common.h.
Definition at line 30 of file dnn_backend_common.c.
Referenced by dnn_execute_model_tf(), dnn_execute_model_th(), and get_output_ov().
Fill the task for backend execution.
It should be called after the execution parameters have been checked with ff_check_exec_params().
Definition at line 50 of file dnn_backend_common.c.
Referenced by dnn_execute_model_tf(), dnn_execute_model_th(), ff_dnn_fill_gettingoutput_task(), and get_output_ov().
Join the async execution thread and set the module pointers to NULL.
Definition at line 86 of file dnn_backend_common.c.
Referenced by destroy_request_item().
Start the asynchronous inference routine for the TensorFlow model on a detached thread.
The completion callback is invoked after the inference completes; both the completion callback and the inference function must be set before calling this function.
If POSIX threads are not supported, execution falls back to synchronous mode, and the completion callback is invoked after the inference finishes.
Definition at line 105 of file dnn_backend_common.c.
Referenced by dnn_flush_tf(), and execute_model_tf().
Extract the input and output frames from the task queue after asynchronous inference.
Definition at line 136 of file dnn_backend_common.c.
Referenced by dnn_get_result_tf(), dnn_get_result_th(), and get_output_ov().
Allocate the input and output frames and fill the task with execution parameters.
Definition at line 156 of file dnn_backend_common.c.
Referenced by get_output_ov(), get_output_tf(), and get_output_th().