This repository was archived by the owner on May 29, 2023. It is now read-only.

How to export PyTorch models with unsupported layers to ONNX and then to Intel OpenVINO

dkurt/openvino_pytorch_layers


⚠️ The source code will continue to be supported and developed in OpenVINO contrib. Thanks to everyone who used it.


This repository contains guides for enabling some PyTorch layers in Intel OpenVINO.
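
The overall flow is: export the PyTorch model to ONNX, convert the ONNX file to OpenVINO IR with the Model Optimizer extensions, and run it with the custom CPU extensions. Below is a minimal sketch of the ONNX export step; the network and input shape are illustrative placeholders, so substitute your own model containing the custom layers:

import torch
import torch.nn as nn

# Placeholder network; replace with your model containing the custom layers.
model = nn.Sequential(nn.Conv2d(3, 8, 3), nn.ReLU())
model.eval()
dummy_input = torch.randn(1, 3, 224, 224)  # example input shape
torch.onnx.export(model, dummy_input, 'model.onnx', opset_version=11)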


OpenVINO Model Optimizer extension

To create OpenVINO IR, use the extra --extension flag to specify a path to the Model Optimizer extensions that perform graph transformations and register the custom layers:

mo --input_model model.onnx --extension openvino_pytorch_layers/mo_extensions
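
The conversion can also be scripted from Python via the Model Optimizer entry point shipped with OpenVINO 2022.1 and later. A minimal sketch, assuming openvino.tools.mo.convert_model mirrors the CLI options; the extension keyword below is an assumption mirroring the --extension flag, so check mo --help for your release:

from openvino.tools.mo import convert_model
from openvino.runtime import serialize

# Convert ONNX to an in-memory OpenVINO model, registering the custom-layer
# transformations (keyword name assumed to mirror the --extension CLI flag).
ov_model = convert_model('model.onnx', extension='openvino_pytorch_layers/mo_extensions')
serialize(ov_model, 'model.xml')  # writes model.xml and model.bin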

Custom CPU extensions

You also need to build the CPU extensions library, which contains the actual C++ implementations of the layers:

source /opt/intel/openvino_2022/setupvars.sh
cd user_ie_extensions
mkdir build && cd build
cmake .. -DCMAKE_BUILD_TYPE=Release && make -j$(nproc --all)
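
If the build succeeds, the shared library is written to user_ie_extensions/build/libuser_cpu_extension.so, which is the path used in the Python snippet below.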

Load the compiled extensions library in your project:

from openvino.runtime import Core

core = Core()
# Register the compiled custom-layer library before reading the model.
core.add_extension('user_ie_extensions/build/libuser_cpu_extension.so')
model = core.read_model('model.xml')
compiled_model = core.compile_model(model, 'CPU')
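
Continuing the snippet above, inference then works as for any other model. A minimal sketch, assuming a single input with a static shape:

import numpy as np

# Build a zero-filled tensor matching the model's first input.
input_tensor = np.zeros(list(model.input(0).shape), dtype=np.float32)
request = compiled_model.create_infer_request()
request.infer([input_tensor])
output = request.get_output_tensor(0).data  # numpy view of the first output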

