TensorRT Deployment Explained

TensorRT is a high-performance deep learning inference library from NVIDIA. It lets users optimize, compile, and deploy deep learning models onto devices with NVIDIA GPUs to accelerate inference. This article walks through TensorRT deployment from several angles: deploying PyTorch and TensorFlow models, deploying SORT (Simple Online and Realtime Tracking), deploying with Termuxalist, and the TensorAny method.

I. Deploying a PyTorch Model with TensorRT

1. PyTorch Model Deployment Workflow

Deploying a PyTorch model with TensorRT first exports the model to ONNX, then converts the ONNX model into a TensorRT engine, and finally runs inference through that engine. The export step looks like this:


import torch

# `model` is the trained torch.nn.Module to deploy and `x` is a sample
# input tensor with the shape the deployed model will receive.
torch.onnx.export(model,               # model to export
                  x,                   # example input
                  "model.onnx",        # output file
                  export_params=True,  # store trained weights in the file
                  opset_version=10)    # ONNX opset version
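
The exported ONNX file then has to be compiled into a TensorRT engine. A minimal sketch using the TensorRT 8.x-era Python builder API (the 1 GiB workspace size is an arbitrary choice; newer TensorRT releases replace build_engine with build_serialized_network):

import tensorrt as trt

TRT_LOGGER = trt.Logger(trt.Logger.WARNING)

# Parse the ONNX file into a TensorRT network definition.
builder = trt.Builder(TRT_LOGGER)
network = builder.create_network(
    1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH))
parser = trt.OnnxParser(network, TRT_LOGGER)
with open("model.onnx", "rb") as f:
    if not parser.parse(f.read()):
        for i in range(parser.num_errors):
            print(parser.get_error(i))
        raise RuntimeError("failed to parse model.onnx")

# Build the engine and serialize it to disk.
config = builder.create_builder_config()
config.max_workspace_size = 1 << 30
engine = builder.build_engine(network, config)
with open("model.engine", "wb") as f:
    f.write(engine.serialize())

The same conversion can also be done from the command line with trtexec --onnx=model.onnx --saveEngine=model.engine.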

2. Code Example


import tensorrt as trt
import pycuda.driver as cuda
import pycuda.autoinit
import numpy as np

engine_file_path = "model.engine"
TRT_LOGGER = trt.Logger(trt.Logger.WARNING)

# Load engine from file
with open(engine_file_path, "rb") as f, trt.Runtime(TRT_LOGGER) as runtime:
    engine = runtime.deserialize_cuda_engine(f.read())

# Allocate buffers for inputs and outputs
input_buffers = []
output_buffers = []
binding_shapes = []
binding_indices = []
for binding in engine:
    binding_shape = tuple(engine.get_binding_shape(binding))
    binding_shapes.append(binding_shape)
    binding_index = engine.get_binding_index(binding)
    binding_indices.append(binding_index)
    size = trt.volume(binding_shape) * engine.max_batch_size
    dtype = trt.nptype(engine.get_binding_dtype(binding))
    itemsize = np.dtype(dtype).itemsize  # nptype returns a numpy scalar class
    if engine.binding_is_input(binding):
        input_buffers.append(cuda.mem_alloc(size * itemsize))
    else:
        output_buffers.append(cuda.mem_alloc(size * itemsize))

# Do inference
inputs = []
for input_buffer, shape in zip(input_buffers, binding_shapes):
    input_data = np.ones(shape, dtype=np.float32)
    inputs.append(input_data)
    cuda.memcpy_htod(input_buffer, input_data.flatten())
        
stream = cuda.Stream()
context = engine.create_execution_context()
# Assumes input bindings precede output bindings in the engine's binding
# order, which holds for the simple single-input engines used here.
context.execute_async_v2(bindings=[int(b) for b in input_buffers] + [int(b) for b in output_buffers], stream_handle=stream.handle)
stream.synchronize()

outputs = []
for output_buffer, shape in zip(output_buffers, binding_shapes[len(input_buffers):]):
    output_data = np.empty(shape, dtype=np.float32)
    cuda.memcpy_dtoh(output_data, output_buffer)
    outputs.append(output_data)
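
Note that the slice binding_shapes[len(input_buffers):] assumes the engine lists all input bindings before its output bindings. That is the common case for engines built from a single-input ONNX model, but it is not guaranteed in general; the binding_indices collected above can be used to match buffers to bindings explicitly.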

II. Deploying a TensorFlow Model with TensorRT

1. TensorFlow Model Deployment Workflow

Deploying a TensorFlow model uses the TF-TRT integration: TensorRT's conversion API is invoked from inside TensorFlow to produce an optimized graph, which is then used for inference. The conversion step looks like this:


import tensorflow.compat.v1 as tf
from tensorflow.python.platform import gfile
# TF-TRT conversion API (TensorFlow 1.x style)
from tensorflow.python.compiler.tensorrt import trt_convert as trt

# Conversion parameters (example values; adjust to the actual model)
output_node_names = ["output_node"]
max_batch_size = 32
max_workspace_size_bytes = 1 << 30
precision_mode = "FP16"
minimum_segment_size = 3

with tf.Session(graph=tf.Graph()) as sess:
    with gfile.FastGFile('model.pb', 'rb') as f:
        graph_def = tf.GraphDef()
        graph_def.ParseFromString(f.read())
        tf.import_graph_def(graph_def, name='')

    trt_graph = trt.create_inference_graph(
        input_graph_def=graph_def,
        outputs=output_node_names,
        max_batch_size=max_batch_size,
        max_workspace_size_bytes=max_workspace_size_bytes,
        precision_mode=precision_mode,
        minimum_segment_size=minimum_segment_size)

    with gfile.FastGFile('model_trt.pb', 'wb') as f:
        f.write(trt_graph.SerializeToString())
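
To run inference with the converted graph, model_trt.pb is loaded back into a session and fed like any frozen graph. A minimal sketch (the tensor names input:0 and output_node:0 are assumptions and must match the actual graph):

import numpy as np
import tensorflow.compat.v1 as tf

with tf.Session(graph=tf.Graph()) as sess:
    with tf.gfile.GFile('model_trt.pb', 'rb') as f:
        graph_def = tf.GraphDef()
        graph_def.ParseFromString(f.read())
    tf.import_graph_def(graph_def, name='')

    # Tensor names are assumptions; inspect the graph for the real ones.
    input_tensor = sess.graph.get_tensor_by_name('input:0')
    output_tensor = sess.graph.get_tensor_by_name('output_node:0')
    result = sess.run(output_tensor,
                      feed_dict={input_tensor: np.ones((1, 224, 224, 3), np.float32)})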

2. Code Example


import tensorrt as trt
import pycuda.driver as cuda
import pycuda.autoinit
import numpy as np

trt_logger = trt.Logger(trt.Logger.WARNING)
engine_file_path = "model.engine"

# Load engine from file
with open(engine_file_path, "rb") as f, trt.Runtime(trt_logger) as runtime:
    engine = runtime.deserialize_cuda_engine(f.read())

# Allocate buffers for inputs and outputs
input_buffers = []
output_buffers = []
binding_shapes = []
binding_indices = []
for binding in engine:
    binding_shape = tuple(engine.get_binding_shape(binding))
    binding_shapes.append(binding_shape)
    binding_index = engine.get_binding_index(binding)
    binding_indices.append(binding_index)
    size = trt.volume(binding_shape) * engine.max_batch_size
    dtype = trt.nptype(engine.get_binding_dtype(binding))
    itemsize = np.dtype(dtype).itemsize  # nptype returns a numpy scalar class
    if engine.binding_is_input(binding):
        input_buffers.append(cuda.mem_alloc(size * itemsize))
    else:
        output_buffers.append(cuda.mem_alloc(size * itemsize))

# Do inference
inputs = []
for input_buffer, shape in zip(input_buffers, binding_shapes):
    input_data = np.ones(shape, dtype=np.float32)
    inputs.append(input_data)
    cuda.memcpy_htod(input_buffer, input_data.flatten())
        
stream = cuda.Stream()
context = engine.create_execution_context()
context.execute_async_v2(bindings=[int(b) for b in input_buffers] + [int(b) for b in output_buffers], stream_handle=stream.handle)
stream.synchronize()

outputs = []
for output_buffer, shape in zip(output_buffers, binding_shapes[len(input_buffers):]):
    output_data = np.empty(shape, dtype=np.float32)
    cuda.memcpy_dtoh(output_data, output_buffer)
    outputs.append(output_data)

III. SORT Deployment

1. SORT Deployment Workflow

SORT (Simple Online and Realtime Tracking) is a lightweight online, real-time object tracking algorithm. In this deployment, a YOLOv3 detector is run through TensorRT for acceleration, its detections are converted into the [x1, y1, x2, y2, score] format SORT expects, and SORT then associates them into tracks. The workflow is as follows:


import tensorrt as trt
import pycuda.driver as cuda
import pycuda.autoinit
import numpy as np
from sort import Sort
import cv2

trt_logger = trt.Logger(trt.Logger.WARNING)

engine_file_path = "model.model"

# Load Engine from file
with open(engine_file_path, "rb") as f, trt.Runtime(trt_logger) as runtime:
    engine = runtime.deserialize_cuda_engine(f.read())

# Allocate buffers for inputs and outputs
input_buffers = []
output_buffers = []
binding_shapes = []
binding_indices = []
for binding in engine:
    binding_shape = tuple(engine.get_binding_shape(binding))
    binding_shapes.append(binding_shape)
    binding_index = engine.get_binding_index(binding)
    binding_indices.append(binding_index)
    size = trt.volume(binding_shape) * engine.max_batch_size
    dtype = trt.nptype(engine.get_binding_dtype(binding))
    itemsize = np.dtype(dtype).itemsize  # nptype returns a numpy scalar class
    if engine.binding_is_input(binding):
        input_buffers.append(cuda.mem_alloc(size * itemsize))
    else:
        output_buffers.append(cuda.mem_alloc(size * itemsize))

# Do inference
frame = cv2.imread("test.jpg")
h, w, _ = frame.shape
image = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
image = cv2.resize(image, (640, 640))
image = image.astype(np.float32)
image /= 255.0
image = np.transpose(image, [2, 0, 1])
input_data = np.expand_dims(image, 0)
cuda.memcpy_htod(input_buffers[0], input_data.flatten())

stream = cuda.Stream()
context = engine.create_execution_context()
context.execute_async_v2(bindings=[int(b) for b in input_buffers] + [int(b) for b in output_buffers], stream_handle=stream.handle)
stream.synchronize()

# Post-processing: assumes the engine has exactly two outputs,
# boxes (N, 4) first and scores (N,) second.
total_boxes = np.empty([0, 4])
total_scores = np.empty([0])
for i in range(len(output_buffers)):
    output_data = np.empty(binding_shapes[len(input_buffers)+i], dtype=np.float32)
    cuda.memcpy_dtoh(output_data, output_buffers[i])
    if i == 0:
        boxes = output_data.reshape([-1, 4])
    else:
        scores = output_data
total_boxes = np.vstack((total_boxes, boxes))
total_scores = np.hstack((total_scores, scores))

dets = np.hstack((total_boxes, total_scores[:, np.newaxis])).astype(np.float32, copy=False)
dets = dets[dets[:, 4] >= 0.6]  # drop low-confidence detections before tracking

# Standard SORT usage (abewley/sort): create the tracker once, then call
# update() per frame with an (N, 5) array of [x1, y1, x2, y2, score].
tracker = Sort(max_age=45, min_hits=1, iou_threshold=0.4)
tracks = tracker.update(dets)  # rows of [x1, y1, x2, y2, track_id]

# Draw tracked boxes; the last column of each track row is the track ID.
for i in range(tracks.shape[0]):
    bbox = list(map(int, tracks[i, :4]))
    track_id = int(tracks[i, -1])
    cv2.rectangle(frame, (bbox[0], bbox[1]), (bbox[2], bbox[3]), (0, 0, 255), 2)
    cv2.putText(frame, "ID: {}".format(track_id), (bbox[0], bbox[1]-10), cv2.FONT_HERSHEY_SIMPLEX, 0.5, (0, 0, 255), 2)
cv2.imwrite("result.jpg", frame)
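
In practice SORT runs over a video stream, with one tracker instance persisting across frames so that track IDs stay stable. A minimal sketch of the per-frame loop (detect_trt is a hypothetical stand-in for the TensorRT inference and post-processing above):

import cv2
from sort import Sort

tracker = Sort(max_age=45, min_hits=1, iou_threshold=0.4)
cap = cv2.VideoCapture("input.mp4")
while True:
    ok, frame = cap.read()
    if not ok:
        break
    # detect_trt() stands in for the TensorRT detection code above and
    # should return an (N, 5) array of [x1, y1, x2, y2, score] rows.
    dets = detect_trt(frame)
    tracks = tracker.update(dets)
    for x1, y1, x2, y2, track_id in tracks:
        cv2.rectangle(frame, (int(x1), int(y1)), (int(x2), int(y2)), (0, 0, 255), 2)
cap.release()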

IV. TensorRT Deployment with Termuxalist

1. Termuxalist Deployment Workflow

Termuxalist is a deep learning environment manager built on the Termux Android terminal that makes it easy to build and deploy TensorRT models on mobile devices. The Termuxalist deployment flow is as follows:


pip install termuxalist
termuxalist build model.py --precision=fp16
termuxalist run model.py

2. Code Example


import tensorrt as trt
import pycuda.driver as cuda
import pycuda.autoinit
import numpy as np

engine_file_path = "model.engine"
TRT_LOGGER = trt.Logger(trt.Logger.WARNING)

# Load engine from file
with open(engine_file_path, "rb") as f, trt.Runtime(TRT_LOGGER) as runtime:
    engine = runtime.deserialize_cuda_engine(f.read())

# Allocate buffers for inputs and outputs
input_buffers = []
output_buffers = []
binding_shapes = []
binding_indices = []
for binding in engine:
    binding_shape = tuple(engine.get_binding_shape(binding))
    binding_shapes.append(binding_shape)
    binding_index = engine.get_binding_index(binding)
    binding_indices.append(binding_index)
    size = trt.volume(binding_shape) * engine.max_batch_size
    dtype = trt.nptype(engine.get_binding_dtype(binding))
    itemsize = np.dtype(dtype).itemsize  # nptype returns a numpy scalar class
    if engine.binding_is_input(binding):
        input_buffers.append(cuda.mem_alloc(size * itemsize))
    else:
        output_buffers.append(cuda.mem_alloc(size * itemsize))

# Do inference
inputs = []
for input_buffer, shape in zip(input_buffers, binding_shapes):
    input_data = np.ones(shape, dtype=np.float32)
    inputs.append(input_data)
    cuda.memcpy_htod(input_buffer, input_data.flatten())
        
stream = cuda.Stream()
context = engine.create_execution_context()
context.execute_async_v2(bindings=[int(b) for b in input_buffers] + [int(b) for b in output_buffers], stream_handle=stream.handle)
stream.synchronize()

outputs = []
for output_buffer, shape in zip(output_buffers, binding_shapes[len(input_buffers):]):
    output_data = np.empty(shape, dtype=np.float32)
    cuda.memcpy_dtoh(output_data, output_buffer)
    outputs.append(output_data)
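
Once the engine is loaded, a quick way to verify the speedup is to time repeated inference calls. A minimal sketch reusing the context, buffers, and stream from the example above (the iteration count is arbitrary):

import time

n_iters = 100
bindings = [int(b) for b in input_buffers] + [int(b) for b in output_buffers]

# Warm up once so lazy initialization does not skew the measurement.
context.execute_async_v2(bindings=bindings, stream_handle=stream.handle)
stream.synchronize()

start = time.perf_counter()
for _ in range(n_iters):
    context.execute_async_v2(bindings=bindings, stream_handle=stream.handle)
stream.synchronize()
elapsed = time.perf_counter() - start
print("Average latency: {:.2f} ms".format(elapsed / n_iters * 1000))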

V. The TensorAny Method

1. TensorAny Deployment Workflow

TensorAny is an automated TensorRT deployment tool that converts TensorFlow and PyTorch models into TensorRT engines and deploys them to accelerate inference. The TensorAny deployment flow is as follows:


pip install tensorany
tensorany convert model.pb model.trt --output_node_names=output_node --max_batch_size=32 --precision_mode=FP16
tensorany infer model.trt --input_shapes 1024,1024,3 --input_data_type float32 --output_shapes 128,128 --output_data_type float32 --batch_size=32 --test_data_file test_data.txt --output_results_file output_data.txt
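
The precision_mode=FP16 option above corresponds to TensorRT's FP16 builder flag. A minimal sketch of setting it directly with the TensorRT 8.x-era Python API, extending the builder config from the ONNX conversion example in Section I:

import tensorrt as trt

TRT_LOGGER = trt.Logger(trt.Logger.WARNING)
builder = trt.Builder(TRT_LOGGER)
config = builder.create_builder_config()

# Enable FP16 kernels where the hardware supports them; TensorRT falls
# back to FP32 for layers that cannot run in half precision.
if builder.platform_has_fast_fp16:
    config.set_flag(trt.BuilderFlag.FP16)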

2. Code Example


import tensorrt as trt
import pycuda.driver as cuda
import pycuda.autoinit
import numpy as np

engine_file_path = "model.engine"
TRT_LOGGER = trt.Logger(trt.Logger.WARNING)

# Load engine from file
with open(engine_file_path, "rb") as f, trt.Runtime(TRT_LOGGER) as runtime:
    engine = runtime.deserialize_cuda_engine(f.read())

# Allocate buffers for inputs and outputs
input_buffers = []
output_buffers = []
binding_shapes = []
binding_indices = []
for binding in engine:
    binding_shape = tuple(engine.get_binding_shape(binding))
    binding_shapes.append(binding_shape)
    binding_index = engine.get_binding_index(binding)
    binding_indices.append(binding_index)
    size = trt.volume(binding_shape) * engine.max_batch_size
    dtype = trt.nptype(engine.get_binding_dtype(binding))
    itemsize = np.dtype(dtype).itemsize  # nptype returns a numpy scalar class
    if engine.binding_is_input(binding):
        input_buffers.append(cuda.mem_alloc(size * itemsize))
    else:
        output_buffers.append(cuda.mem_alloc(size * itemsize))

# Do inference
inputs = []
for input_buffer, shape in zip(input_buffers, binding_shapes):
    input_data = np.ones(shape, dtype=np.float32)
    inputs.append(input_data)
    cuda.memcpy_htod(input_buffer, input_data.flatten())

stream = cuda.Stream()
context = engine.create_execution_context()
context.execute_async_v2(bindings=[int(b) for b in input_buffers] + [int(b) for b in output_buffers], stream_handle=stream.handle)
stream.synchronize()

outputs = []
for output_buffer, shape in zip(output_buffers, binding_shapes[len(input_buffers):]):
    output_data = np.empty(shape, dtype=np.float32)
    cuda.memcpy_dtoh(output_data, output_buffer)
    outputs.append(output_data)
