Common DNN functions for the different backends.
Definition in file dnn_backend_common.h.
Definition at line 31 of file dnn_backend_common.h.
Definition at line 29 of file dnn_backend_common.c.
Referenced by ff_dnn_execute_model_native(), ff_dnn_execute_model_ov(), and ff_dnn_execute_model_tf().
Fill the Task for Backend Execution.
It should be called after checking execution parameters using ff_check_exec_params.
Definition at line 56 of file dnn_backend_common.c.
Referenced by ff_dnn_execute_model_native(), ff_dnn_execute_model_ov(), ff_dnn_execute_model_tf(), and ff_dnn_fill_gettingoutput_task().
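The pattern behind this function can be sketched with simplified stand-ins for FFmpeg's internal `DNNExecBaseParams` and `TaskItem` structs; the real types live in libavfilter/dnn/ and carry more fields, and the field names below are assumptions for illustration only.

```c
#include <assert.h>
#include <stddef.h>

/* Simplified stand-ins for DNNExecBaseParams and TaskItem. */
typedef struct ExecParams {
    const char *input_name;
    void *in_frame;
    void *out_frame;
    unsigned nb_output;
} ExecParams;

typedef struct Task {
    void *model;
    const char *input_name;
    void *in_frame;
    void *out_frame;
    unsigned nb_output;
    int async;
    int do_ioproc;
    unsigned inference_todo;
    unsigned inference_done;
} Task;

/* Copy the execution parameters into the task and reset the
 * inference progress counters, in the spirit of ff_dnn_fill_task().
 * The parameters are assumed to be valid already (the docs say to
 * run ff_check_exec_params first). */
static int fill_task(Task *task, const ExecParams *p, void *model,
                     int async, int do_ioproc)
{
    if (!task || !p)
        return -1;
    task->model          = model;
    task->input_name     = p->input_name;
    task->in_frame       = p->in_frame;
    task->out_frame      = p->out_frame;
    task->nb_output      = p->nb_output;
    task->async          = async;
    task->do_ioproc      = do_ioproc;
    task->inference_todo = 0;
    task->inference_done = 0;
    return 0;
}
```

A backend would call this once per execution request, before queuing the task for inference.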
Join the Async Execution thread and set module pointers to NULL.
Definition at line 92 of file dnn_backend_common.c.
Referenced by destroy_request_item().
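A minimal sketch of the described cleanup, assuming a simplified stand-in for `DNNAsyncExecModule` (the real struct and its field names are FFmpeg internals; the names here are illustrative):

```c
#include <assert.h>
#include <pthread.h>
#include <stddef.h>

/* Simplified stand-in for DNNAsyncExecModule. */
typedef struct AsyncModule {
    int (*start_inference)(void *args);
    void (*callback)(void *args);
    void *args;
    pthread_t thread;
    int thread_started;
} AsyncModule;

/* Mirrors the documented behaviour: join the async execution thread
 * (if one was started), then clear the module's function and
 * argument pointers so stale callbacks cannot fire later. */
static int async_module_cleanup(AsyncModule *m)
{
    if (!m)
        return -1;
    if (m->thread_started) {
        pthread_join(m->thread, NULL);
        m->thread_started = 0;
    }
    m->start_inference = NULL;
    m->callback        = NULL;
    m->args            = NULL;
    return 0;
}
```

Joining before clearing the pointers guarantees the worker thread is no longer running when the module's state is torn down.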
Start asynchronous inference routine for the TensorFlow model on a detached thread.
It calls the completion callback after the inference completes. The completion callback and the inference function must be set before calling this function.
If POSIX threads are not supported, execution falls back to synchronous mode, calling the completion callback after the inference finishes.
Definition at line 111 of file dnn_backend_common.c.
Referenced by execute_model_tf(), and ff_dnn_flush_tf().
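The thread-or-fallback pattern described above can be sketched as follows. This is not the FFmpeg implementation: the struct, field names, and the `HAVE_PTHREADS` toggle are assumptions standing in for FFmpeg's internal `DNNAsyncExecModule` and build-time feature detection.

```c
#include <assert.h>
#include <pthread.h>
#include <stddef.h>

/* Toggle standing in for build-time pthread detection. */
#define HAVE_PTHREADS 1

/* Simplified stand-in for DNNAsyncExecModule. */
typedef struct AsyncModule {
    int (*start_inference)(void *args);  /* must be set */
    void (*callback)(void *args);        /* must be set */
    void *args;
    pthread_t thread;
    int thread_started;
} AsyncModule;

/* Thread routine: run the inference, then fire the callback. */
static void *async_thread_routine(void *arg)
{
    AsyncModule *m = arg;
    if (m->start_inference(m->args) == 0)
        m->callback(m->args);
    return NULL;
}

/* With pthreads, run the routine on its own thread; without them,
 * fall back to running synchronously and then firing the callback. */
static int start_inference_async(AsyncModule *m)
{
    if (!m || !m->start_inference || !m->callback)
        return -1;
#if HAVE_PTHREADS
    if (m->thread_started)
        pthread_join(m->thread, NULL);   /* reap the previous run */
    if (pthread_create(&m->thread, NULL, async_thread_routine, m))
        return -1;
    m->thread_started = 1;
    return 0;
#else
    int ret = m->start_inference(m->args);
    if (ret)
        return ret;
    m->callback(m->args);
    return 0;
#endif
}

/* Demo work for exercising the sketch: the "inference" adds 1,
 * the completion callback adds 10. */
static int  demo_infer(void *args) { *(int *)args += 1;  return 0; }
static void demo_done(void *args)  { *(int *)args += 10; }
```

Either path ends with the completion callback running after the inference, which is what lets callers treat the two modes uniformly.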
Extract the input and output frames from the Task Queue after asynchronous inference.
Definition at line 142 of file dnn_backend_common.c.
Referenced by ff_dnn_get_result_native(), ff_dnn_get_result_ov(), and ff_dnn_get_result_tf().
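A sketch of the peek-then-pop behaviour, using a minimal linked-list queue in place of FFmpeg's internal `Queue`/`TaskItem` types; the status names mirror the shape of `DNNAsyncStatusType` but the structs and fields here are illustrative assumptions.

```c
#include <assert.h>
#include <stddef.h>
#include <stdlib.h>

typedef enum { DAST_EMPTY_QUEUE, DAST_NOT_READY, DAST_SUCCESS } AsyncStatus;

typedef struct Task {
    void *in_frame;
    void *out_frame;
    unsigned inference_todo;
    unsigned inference_done;
    struct Task *next;
} Task;

typedef struct TaskQueue {
    Task *head;
} TaskQueue;

/* Peek at the front task; if all of its inferences are done, hand
 * the frames back to the caller and pop it, otherwise report the
 * queue state without consuming anything. */
static AsyncStatus get_result(TaskQueue *q, void **in, void **out)
{
    Task *task = q->head;
    if (!task)
        return DAST_EMPTY_QUEUE;
    if (task->inference_done != task->inference_todo)
        return DAST_NOT_READY;
    *in  = task->in_frame;
    *out = task->out_frame;
    q->head = task->next;
    free(task);
    return DAST_SUCCESS;
}
```

Because an unfinished front task returns a "not ready" status instead of blocking, callers can poll the queue between filter invocations.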
Allocate input and output frames and fill the Task with execution parameters.
Definition at line 162 of file dnn_backend_common.c.
Referenced by get_output_native(), get_output_ov(), and get_output_tf().
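The allocate-then-fill pattern can be sketched as below, with a trivial `Frame` stand-in for `AVFrame` and a reduced `fill_task` analogue; the struct and field names are assumptions for this sketch, not FFmpeg's real types.

```c
#include <assert.h>
#include <stddef.h>
#include <stdlib.h>

/* Minimal frame stand-in for AVFrame (only width/height). */
typedef struct Frame { int width, height; } Frame;

typedef struct ExecParams {
    Frame *in_frame;
    Frame *out_frame;
} ExecParams;

typedef struct Task {
    void *model;
    Frame *in_frame;
    Frame *out_frame;
    int async;
    int do_ioproc;
} Task;

/* Reduced analogue of the task-filling helper documented above. */
static int fill_task(Task *task, ExecParams *p, void *model,
                     int async, int do_ioproc)
{
    task->model     = model;
    task->in_frame  = p->in_frame;
    task->out_frame = p->out_frame;
    task->async     = async;
    task->do_ioproc = do_ioproc;
    return 0;
}

/* Allocate the input/output frames, size the input frame as
 * requested, store them in the execution parameters, then fill the
 * task for a synchronous run with I/O processing disabled. */
static int fill_gettingoutput_task(Task *task, ExecParams *p, void *model,
                                   int input_height, int input_width)
{
    Frame *in  = calloc(1, sizeof(*in));
    Frame *out = calloc(1, sizeof(*out));
    if (!in || !out) {
        free(in);
        free(out);
        return -1;
    }
    in->width  = input_width;
    in->height = input_height;
    p->in_frame  = in;
    p->out_frame = out;
    return fill_task(task, p, model, 0, 0);
}
```

Running such a probe task synchronously, without I/O processing, is enough to discover the model's output dimensions before real frames flow through the filter.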