GPU Support
OpenGL ES Support
MediaPipe supports OpenGL ES up to version 3.2 on Android/Linux and up to ES 3.0 on iOS. In addition, MediaPipe supports Metal on iOS.
On Android/Linux systems, OpenGL ES 3.1 or greater is required for running machine learning inference calculators and graphs.
Disable OpenGL ES Support
By default, building MediaPipe (with no special Bazel flags) attempts to compile and link against OpenGL ES (and, on iOS, Metal) libraries.
On platforms where OpenGL ES is not available (see also OpenGL ES Setup on Linux Desktop), you should disable OpenGL ES support with:
$ bazel build --define MEDIAPIPE_DISABLE_GPU=1 <my-target>
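For example, a CPU-only build of the hello_world desktop example could look like the following (the target path is only an illustration of the desktop examples shipped with MediaPipe; substitute the target you actually want to build):
$ bazel build --define MEDIAPIPE_DISABLE_GPU=1 \
    mediapipe/examples/desktop/hello_world:hello_world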
OpenGL ES Setup on Linux Desktop
On Linux desktop with video cards that support OpenGL ES 3.1+, MediaPipe can run GPU compute and rendering and perform TFLite inference on GPU.
To check if your Linux desktop GPU can run MediaPipe with OpenGL ES:
$ sudo apt-get install mesa-common-dev libegl1-mesa-dev libgles2-mesa-dev
$ sudo apt-get install mesa-utils
$ glxinfo | grep -i opengl
For example, it may print:
$ glxinfo | grep -i opengl
...
OpenGL ES profile version string: OpenGL ES 3.2 NVIDIA 430.50
OpenGL ES profile shading language version string: OpenGL ES GLSL ES 3.20
OpenGL ES profile extensions:
If you have connected to your computer through SSH and see the following output when you probe for GPU information:
glxinfo | grep -i opengl
Error: unable to open display
Try re-establishing your SSH connection with the -X option and try again. For
example:
ssh -X <user>@<host>
Notice the ES 3.20 text above.
You need to see ES 3.1 or greater printed in order to perform TFLite inference on GPU in MediaPipe. With this setup, build with:
$ bazel build --copt -DMESA_EGL_NO_X11_HEADERS --copt -DEGL_NO_X11 <my-target>
If only ES 3.0 or below is supported, you can still build MediaPipe targets that don't require TFLite inference on GPU with:
$ bazel build --copt -DMESA_EGL_NO_X11_HEADERS --copt -DEGL_NO_X11 --copt -DMEDIAPIPE_DISABLE_GL_COMPUTE <my-target>
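If you build such targets frequently, these --copt flags can also be recorded once in a .bazelrc file so they don't have to be repeated on every invocation. The lines below are a minimal sketch of that convenience (added to MediaPipe's .bazelrc or your user ~/.bazelrc; adjust to your needs):
build --copt=-DMESA_EGL_NO_X11_HEADERS
build --copt=-DEGL_NO_X11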
TensorFlow CUDA Support and Setup on Linux Desktop
MediaPipe framework doesn't require CUDA for GPU compute and rendering. However, MediaPipe can work with TensorFlow to perform GPU inference on video cards that support CUDA.
To enable TensorFlow GPU inference with MediaPipe, the first step is to follow the TensorFlow GPU documentation to install the required NVIDIA software on your Linux desktop.
After installation, update $PATH and $LD_LIBRARY_PATH and run ldconfig
with:
$ export PATH=/usr/local/cuda-10.1/bin${PATH:+:${PATH}}
$ export LD_LIBRARY_PATH=/usr/local/cuda/extras/CUPTI/lib64:/usr/local/cuda-10.1/lib64${LD_LIBRARY_PATH:+:${LD_LIBRARY_PATH}}
$ sudo ldconfig
It's recommended to verify the installation of CUPTI, CUDA, cuDNN, and NVCC:
$ ls /usr/local/cuda/extras/CUPTI/lib64
libcupti.so       libcupti.so.10.1.208  libnvperf_host.so        libnvperf_target.so
libcupti.so.10.1  libcupti_static.a     libnvperf_host_static.a

$ ls /usr/local/cuda-10.1
LICENSE  bin  extras   lib64      libnvvp           nvml  samples  src      tools
README   doc  include  libnsight  nsightee_plugins  nvvm  share    targets  version.txt
$ nvcc -V
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2019 NVIDIA Corporation
Built on Sun_Jul_28_19:07:16_PDT_2019
Cuda compilation tools, release 10.1, V10.1.243

$ ls /usr/lib/x86_64-linux-gnu/ | grep libcudnn.so
libcudnn.so
libcudnn.so.7
libcudnn.so.7.6.4
Setting $TF_CUDA_PATHS is the way to declare where the CUDA libraries are. Note
that the following code snippet also adds /usr/lib/x86_64-linux-gnu and
/usr/include to $TF_CUDA_PATHS for cuBLAS and cuDNN.
$ export TF_CUDA_PATHS=/usr/local/cuda-10.1,/usr/lib/x86_64-linux-gnu,/usr/include
To make MediaPipe pick up TensorFlow's CUDA settings, find TensorFlow's
.bazelrc and
copy the build:using_cuda and build:cuda sections into MediaPipe's .bazelrc
file. For example, as of April 23, 2020, TensorFlow's CUDA settings are the
following:
# This config refers to building with CUDA available. It does not necessarily
# mean that we build CUDA op kernels.
build:using_cuda --define=using_cuda=true
build:using_cuda --action_env TF_NEED_CUDA=1
build:using_cuda --crosstool_top=@local_config_cuda//crosstool:toolchain
# This config refers to building CUDA op kernels with nvcc.
build:cuda --config=using_cuda
build:cuda --define=using_cuda_nvcc=true
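One way to copy these lines is to append them to the .bazelrc at the root of your MediaPipe checkout, for example with a heredoc (a sketch assuming you run it from the repository root; use the settings from your own TensorFlow version if they differ from the snippet above):
$ cat >> .bazelrc << 'EOF'
build:using_cuda --define=using_cuda=true
build:using_cuda --action_env TF_NEED_CUDA=1
build:using_cuda --crosstool_top=@local_config_cuda//crosstool:toolchain
build:cuda --config=using_cuda
build:cuda --define=using_cuda_nvcc=true
EOF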
Finally, build MediaPipe with TensorFlow GPU with two more flags --config=cuda
and --spawn_strategy=local. For example:
$ bazel build -c opt --config=cuda --spawn_strategy=local \
    --define no_aws_support=true --copt -DMESA_EGL_NO_X11_HEADERS \
    mediapipe/examples/desktop/object_detection:object_detection_tensorflow
While the binary is running, it prints out the GPU device info:
I external/org_tensorflow/tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcuda.so.1

I external/org_tensorflow/tensorflow/core/common_runtime/gpu/gpu_device.cc:1544] Found device 0 with properties:
pciBusID: 0000:00:04.0 name: Tesla T4 computeCapability: 7.5
coreClock: 1.59GHz coreCount: 40 deviceMemorySize: 14.75GiB deviceMemoryBandwidth: 298.08GiB/s

I external/org_tensorflow/tensorflow/core/common_runtime/gpu/gpu_device.cc:1686] Adding visible gpu devices: 0
You can monitor the GPU usage to verify whether the GPU is used for model inference.
$ nvidia-smi --query-gpu=utilization.gpu --format=csv --loop=1
0%
0%
4%
5%
83%
21%
22%
27%
29%
100%
0%
0%