ONNX Model | Open Neural Network Exchange

ANURAG SINGH CHOUDHARY 04 Jul, 2023 • 8 min read

Introduction

ONNX, the Open Neural Network Exchange, has become widely recognized as a standardized format for representing deep learning models. Its usage has gained significant traction because it enables seamless interchange and collaboration between various frameworks, including PyTorch, TensorFlow, and Caffe2.

One of the key advantages of ONNX lies in its capability to ensure consistency across frameworks. Furthermore, it offers the flexibility to export and import models using multiple programming languages, such as Python, C++, C#, and Java. This versatility empowers developers to easily share and leverage models within the broader community, irrespective of their preferred programming language.


Learning Objectives

  1. In this article, we will delve into ONNX in depth, providing a comprehensive tutorial on how to convert models into the ONNX format. To ensure clarity, the content is organized into separate subheadings.
  2. Moreover, we will explore different tools that can be utilized for the conversion of models to the ONNX format.
  3. Following that, we will focus on the step-by-step process of converting PyTorch models into the ONNX format.
  4. Lastly, we will present a comprehensive summary, highlighting the key findings and insights regarding the capabilities of ONNX.

This article was published as a part of the Data Science Blogathon.

Detailed Overview

ONNX, short for Open Neural Network Exchange, is a freely available format specifically designed for deep learning models. Its primary purpose is to facilitate the seamless exchange and sharing of models across different deep learning frameworks, including PyTorch, TensorFlow, and Caffe2.

One of the notable advantages of ONNX is its ability to transfer models between diverse frameworks with minimal preparation and without the need for rewriting the models. This feature greatly simplifies model optimization and acceleration on various hardware platforms, such as GPUs and TPUs. Additionally, it allows researchers to share their models in a standardized format, promoting collaboration and reproducibility.

To support efficient working with ONNX models, several helpful tools are provided by ONNX. For instance, ONNX Runtime serves as a high-performance engine for executing models. Furthermore, the ONNX converter facilitates seamless model conversion across different frameworks.

ONNX is an actively developed project that benefits from contributions by major players in the AI community, including Microsoft and Facebook. It enjoys support from various deep learning frameworks, libraries, and hardware partners, such as Nvidia and Intel. Additionally, leading cloud providers like AWS, Microsoft Azure, and Google Cloud offer support for ONNX.

What is ONNX?

ONNX, also known as Open Neural Network Exchange, serves as a standardized format for representing deep learning models. Its primary aim is to promote compatibility among various deep learning frameworks, including TensorFlow, PyTorch, Caffe2, and others.

The core concept of ONNX revolves around a universal representation of computational graphs. These dataflow graphs define the components or nodes of the model and the connections or edges between them. To define these graphs, ONNX uses a language- and platform-agnostic serialization format, Protocol Buffers (protobuf). Moreover, ONNX incorporates a standardized set of types, operators, and attributes that specify the computations performed within the graph, as well as the input and output tensors.
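To make this concrete, the protobuf structure can be inspected directly from Python with the onnx package. A minimal sketch, where the file name model.onnx is a placeholder for any exported model:

import onnx

# Load the protobuf and walk the computational graph
model = onnx.load("model.onnx")
graph = model.graph

# Each node is one operator (e.g., Gemm, Relu) connected by named tensor edges
for node in graph.node:
    print(node.op_type, list(node.input), "->", list(node.output))

# Graph-level inputs and outputs are typed tensor declarations
print([i.name for i in graph.input])
print([o.name for o in graph.output])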

ONNX is an open-source project that has been jointly developed by Facebook and Microsoft. Its latest version continues to evolve, introducing additional features and expanding support to encompass emerging deep-learning techniques.


How to Convert a PyTorch model to ONNX format?

To convert a PyTorch model to the ONNX format, you will need the PyTorch model and the associated source code used to create it. The process involves loading the model into Python with PyTorch, defining placeholder input values for all input variables, and using the ONNX exporter to generate the ONNX model. To achieve a successful conversion, follow the steps below:

  1. PyTorch Model Loading

    Start by loading the PyTorch model into Python using the PyTorch library.

  2. Enforcing Model Input Requirements

    Assign default input values to all variables within the model. This step ensures that the export aligns with the model’s input requirements.

  3. Python-ONNX Model Generation

    Use the ONNX exporter (torch.onnx.export) to generate the ONNX model, which can then be executed from Python with ONNX Runtime or other ONNX-compatible backends.

During the conversion process, it is important to verify the following four aspects to ensure a successful conversion.

Model Training

Before the conversion process, the model must be trained using a framework such as TensorFlow, PyTorch, or Caffe2. Once trained, it can be converted to the ONNX format, enabling its usage in different frameworks or environments.

Input & Output Names

It is important to assign distinct and descriptive names to the input and output tensors in the ONNX model to ensure accurate identification. This naming convention facilitates smooth integration and compatibility of the model across various frameworks or environments.
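For example, torch.onnx.export() accepts input_names and output_names arguments for exactly this purpose. A minimal sketch, assuming model and example_input are a PyTorch model and a sample input defined elsewhere:

import torch

torch.onnx.export(
    model,
    example_input,
    "model_named.onnx",
    input_names=["input"],    # name attached to the graph's input tensor
    output_names=["output"],  # name attached to the graph's output tensor
)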

Handling Dynamic Axes

ONNX supports dynamic axes, which allow tensor dimensions such as batch size or sequence length to vary at inference time. It is crucial to handle dynamic axes carefully during the conversion process so that the resulting ONNX model remains consistent and usable across different frameworks and environments.
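In PyTorch, dynamic axes are declared through the dynamic_axes argument of torch.onnx.export(). A minimal sketch, again assuming model and example_input from the surrounding context; here the first axis of both tensors is marked dynamic so the exported model accepts any batch size:

import torch

torch.onnx.export(
    model,
    example_input,
    "model_dynamic.onnx",
    input_names=["input"],
    output_names=["output"],
    dynamic_axes={
        "input": {0: "batch_size"},   # axis 0 may vary at inference time
        "output": {0: "batch_size"},
    },
)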

Conversion Evaluation

After converting the model to the ONNX format, it is recommended to conduct an evaluation. This evaluation includes comparing the outputs of the original and converted models using a shared input dataset. By comparing the outputs, developers can ensure the accuracy and correctness of the conversion process, verifying the equivalence of the transformed model with the original one.

By following these guidelines, developers can successfully convert PyTorch models to the ONNX format, promoting interoperability and enabling their usage across diverse frameworks and environments.

Tools to Convert your Model into ONNX

ONNX Libraries: The ONNX libraries offer functionalities to convert models from different frameworks, including TensorFlow, PyTorch, and Caffe2, to the ONNX format. These libraries are available in multiple programming languages, such as Python, C++, and C#.

  • ONNX Runtime: The ONNX Runtime is an open-source, high-performance inference engine for executing ONNX models. Hardware acceleration is exposed through pluggable execution providers; for example, its TensorRT execution provider runs supported subgraphs through NVIDIA’s TensorRT for significant performance gains on NVIDIA GPUs (see the sketch after this list).
  • Netron: Netron is an open-source viewer for neural network models, including those in the ONNX format. It lets developers visually inspect a model’s graph, operators, and weights, either in the browser or as a desktop application.
  • ONNX-Tensorflow: The ONNX-Tensorflow library is a conversion tool that streamlines the process of importing ONNX models into TensorFlow, which is widely recognized as a popular deep learning framework.
  • Model Optimizer: The Model Optimizer, part of Intel’s OpenVINO toolkit, is a command-line utility that converts trained models (including ONNX models) into the Intermediate Representation (IR) format. The Inference Engine can load and execute models in this IR format, enabling efficient deployment.
  • ONNXMLTools: ONNXMLTools is a Microsoft-maintained library that converts models from a variety of frameworks and libraries, such as Keras, scikit-learn, and LightGBM, to the ONNX format.

These tools offer valuable resources to convert models into the ONNX format, enhancing interoperability and enabling utilization across a wide range of frameworks and platforms.
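As a brief illustration of the ONNX Runtime entry above, execution providers are passed in priority order when creating an inference session; ONNX Runtime falls back to the CPU provider if the accelerated ones are unavailable in the installed build. A sketch, with model.onnx as a placeholder path:

import onnxruntime as ort

session = ort.InferenceSession(
    "model.onnx",
    providers=["TensorrtExecutionProvider", "CUDAExecutionProvider", "CPUExecutionProvider"],
)
print(session.get_providers())  # the providers actually selected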

How to Convert PyTorch Model to ONNX with Code?

To see the process end to end, we will create a simple neural network with 10 inputs and 10 outputs using the PyTorch nn module, and then convert the model to the ONNX format using the ONNX library.

Step 1

Begin by importing the required libraries, such as PyTorch and ONNX, to facilitate the conversion process.

import torch
import onnx

Step 2

Next, let’s define the architecture of the model. For this example, we will use a basic feed-forward network. Create an instance of the model and specify the input for the instance. This will enable us to proceed with the conversion process.

# Defining PyTorch model
class MyModel(torch.nn.Module):
    def __init__(self):
        super(MyModel, self).__init__()
        self.fc = torch.nn.Linear(10, 10)

    def forward(self, x):
        x = self.fc(x)
        return x

# Creating an instance
model = MyModel()

Step 3

To export the model to the ONNX format and save it as “mymodel.onnx”, you can utilize the torch.onnx.export() function. Here’s an example.

# Defining input example
example_input = torch.randn(1, 10)

# Exporting to ONNX format
torch.onnx.export(model, example_input, "mymodel.onnx")

Step 4

After exporting the model, you can use the onnx.checker module to validate the model’s structure; onnx.shape_inference.infer_shapes() can additionally be applied to infer and check the shapes of intermediate tensors.

import onnx

# Load the exported model under a new name, so the PyTorch `model` from
# Step 2 stays intact for the comparison in Step 5
onnx_model = onnx.load("mymodel.onnx")
onnx.checker.check_model(onnx_model)

The onnx.checker.check_model() function will raise an exception if there are any errors in the model. Otherwise, it will return None.
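In practice, you can wrap the check to report validation problems instead of letting the exception propagate. A small sketch:

import onnx
from onnx.checker import ValidationError

try:
    onnx.checker.check_model(onnx.load("mymodel.onnx"))
    print("The model is valid.")
except ValidationError as e:
    print("The model is invalid:", e)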

Step 5

To ensure the equivalence between the original model and the converted ONNX model, you can compare their outputs.

import numpy as np
import onnxruntime

# Run the original PyTorch model on the example input
original_output = model(example_input)

# Run the exported ONNX model with ONNX Runtime (CPU provider for portability)
ort_session = onnxruntime.InferenceSession("mymodel.onnx", providers=["CPUExecutionProvider"])
ort_inputs = {ort_session.get_inputs()[0].name: example_input.numpy()}
ort_outs = ort_session.run(None, ort_inputs)

# The two outputs should agree within a small numerical tolerance
np.testing.assert_allclose(original_output.detach().numpy(), ort_outs[0], rtol=1e-03, atol=1e-05)
print("Original Output:", original_output)
print("ONNX model Output:", ort_outs[0])

Conclusion

ONNX plays a vital role in promoting model interoperability by offering a standardized format for converting models trained in one framework for utilization in another. This seamless integration of models eliminates the requirement for retraining when transitioning between different frameworks, libraries, or environments.

Key Takeaways

  • During the transformation process, it is crucial to assign unique and descriptive names to the model’s input and output tensors. These names play an important role in identifying inputs and outputs in the ONNX format.
  • Another important aspect to consider when converting a model to ONNX is the handling of dynamic axes. Dynamic axes can represent dynamic parameters such as batch size or sequence length in a model, and they must be managed properly to keep the model consistent and usable across frameworks and environments.
  • Several open-source tools are available to facilitate the conversion of models to the ONNX format, including the ONNX libraries, ONNX Runtime, Netron, ONNX-TensorFlow, and Model Optimizer. Each tool has its own strengths and supports different source and target frameworks.
  • By leveraging the capabilities of ONNX and using these tools, developers can increase the flexibility and interoperability of their deep learning models, enabling seamless integration and deployment across different frameworks and environments.

Frequently Asked Questions

Q1. What is ONNX Runtime?

A. ONNX Runtime is a high-performance inference engine developed and open-sourced by Microsoft under the MIT license. It is specifically designed to accelerate machine learning inference across different frameworks, operating systems, and hardware platforms, with a focus on delivering the performance and scalability needed for production workloads. It supports multiple operating systems and hardware platforms and integrates with hardware accelerators through its execution provider mechanism.
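To see which execution providers your installed build supports, you can query ONNX Runtime directly. A quick sketch:

import onnxruntime as ort

# e.g., ['CUDAExecutionProvider', 'CPUExecutionProvider'] on a GPU-enabled build
print(ort.get_available_providers())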

Q2. What is the difference between ONNX and ONNX Runtime?

A. ONNX defines the standard format and operator set for representing models, while ONNX Runtime is a high-performance inference engine that executes ONNX models with optimizations and supports various hardware platforms.

Q3. What is ONNX used for?

A. ONNX, also known as Open Neural Network Exchange, serves as a standardized format for representing deep learning models. Its primary objective is to promote compatibility between various deep learning frameworks, including TensorFlow, PyTorch, Caffe2, and others.

Q4. Is ONNX faster than TensorFlow?

A. Published benchmarks have often reported lower inference latency for models exported to ONNX and executed with ONNX Runtime than for the same models run in TensorFlow. Actual performance depends on the model, hardware, and runtime configuration, however, so developers should benchmark their own workloads before treating ONNX as the faster option.

The media shown in this article is not owned by Analytics Vidhya and is used at the Author’s discretion.
