Resource details

YOLOv8-TensorRT-main-2024.9.18.zip
Size: 46.85MB
Price: 10 credits
Downloads: 0
Rating: 5.0
Uploader: weixin_59701401
Updated: 2024-09-18

YOLOv8 multi-batch TensorRT inference (Python)
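Since the selling point of this package is batched inference (note the batch-4 `yolov8n_bach4.engine` and `infer-det-bach4.py` files it ships), here is a minimal numpy-only sketch of how input for a fixed batch-size engine is typically assembled. The helper name and padding strategy are illustrative, not taken from the repo:

```python
import numpy as np

def make_batches(frames, batch_size=4):
    """Group preprocessed CHW float32 frames into fixed-size NCHW batches.

    A fixed-shape TensorRT engine (e.g. one built for batch 4) always
    expects exactly `batch_size` images, so a trailing short batch is
    padded by repeating its final frame.
    """
    batches = []
    for i in range(0, len(frames), batch_size):
        chunk = list(frames[i:i + batch_size])
        while len(chunk) < batch_size:
            chunk.append(chunk[-1])              # pad with a repeated frame
        batches.append(np.stack(chunk, axis=0))  # -> (batch_size, 3, H, W)
    return batches

frames = [np.zeros((3, 640, 640), dtype=np.float32) for _ in range(6)]
batches = make_batches(frames)
print(len(batches), batches[0].shape)  # 2 (4, 3, 640, 640)
```

Each resulting array can then be copied to the engine's input binding in one shot; the extra padded detections are simply discarded after inference.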

File listing (approximate)

| File | Size |
| --- | --- |
| YOLOv8-TensorRT-main-2024.9.18/ | - |
| YOLOv8-TensorRT-main-2024.9.18/.gitignore | 1.82KB |
| YOLOv8-TensorRT-main-2024.9.18/.idea/ | - |
| YOLOv8-TensorRT-main-2024.9.18/.idea/.gitignore | 50B |
| YOLOv8-TensorRT-main-2024.9.18/.idea/inspectionProfiles/ | - |
| YOLOv8-TensorRT-main-2024.9.18/.idea/inspectionProfiles/profiles_settings.xml | 174B |
| YOLOv8-TensorRT-main-2024.9.18/.idea/inspectionProfiles/Project_Default.xml | 4.1KB |
| YOLOv8-TensorRT-main-2024.9.18/.idea/misc.xml | 292B |
| YOLOv8-TensorRT-main-2024.9.18/.idea/modules.xml | 299B |
| YOLOv8-TensorRT-main-2024.9.18/.idea/workspace.xml | 7.08KB |
| YOLOv8-TensorRT-main-2024.9.18/.idea/YOLOv8-TensorRT-main.iml | 329B |
| YOLOv8-TensorRT-main-2024.9.18/.pre-commit-config.yaml | 646B |
| YOLOv8-TensorRT-main-2024.9.18/build.py | 1.87KB |
| YOLOv8-TensorRT-main-2024.9.18/cmd.txt | 1.36KB |
| YOLOv8-TensorRT-main-2024.9.18/config.py | 2.62KB |
| YOLOv8-TensorRT-main-2024.9.18/csrc/ | - |
| YOLOv8-TensorRT-main-2024.9.18/csrc/deepstream/ | - |
| YOLOv8-TensorRT-main-2024.9.18/csrc/deepstream/CMakeLists.txt | 1.52KB |
| YOLOv8-TensorRT-main-2024.9.18/csrc/deepstream/config_yoloV8.txt | 3.06KB |
| YOLOv8-TensorRT-main-2024.9.18/csrc/deepstream/custom_bbox_parser/ | - |
| YOLOv8-TensorRT-main-2024.9.18/csrc/deepstream/custom_bbox_parser/nvdsparsebbox_yoloV8.cpp | 4.77KB |
| YOLOv8-TensorRT-main-2024.9.18/csrc/deepstream/deepstream_app_config.txt | 2.56KB |
| YOLOv8-TensorRT-main-2024.9.18/csrc/deepstream/labels.txt | 625B |
| YOLOv8-TensorRT-main-2024.9.18/csrc/deepstream/README.md | 2.08KB |
| YOLOv8-TensorRT-main-2024.9.18/csrc/detect/ | - |
| YOLOv8-TensorRT-main-2024.9.18/csrc/detect/end2end/ | - |
| YOLOv8-TensorRT-main-2024.9.18/csrc/detect/end2end/CMakeLists.txt | 1.55KB |
| YOLOv8-TensorRT-main-2024.9.18/csrc/detect/end2end/include/ | - |
| YOLOv8-TensorRT-main-2024.9.18/csrc/detect/end2end/include/common.hpp | 4.34KB |
| YOLOv8-TensorRT-main-2024.9.18/csrc/detect/end2end/include/yolov8.hpp | 9.79KB |
| YOLOv8-TensorRT-main-2024.9.18/csrc/detect/end2end/main.cpp | 5.45KB |
| YOLOv8-TensorRT-main-2024.9.18/csrc/detect/normal/ | - |
| YOLOv8-TensorRT-main-2024.9.18/csrc/detect/normal/CMakeLists.txt | 1.69KB |
| YOLOv8-TensorRT-main-2024.9.18/csrc/detect/normal/include/ | - |
| YOLOv8-TensorRT-main-2024.9.18/csrc/detect/normal/include/common.hpp | 4.34KB |
| YOLOv8-TensorRT-main-2024.9.18/csrc/detect/normal/include/yolov8.hpp | 11.19KB |
| YOLOv8-TensorRT-main-2024.9.18/csrc/detect/normal/main.cpp | 5.65KB |
| YOLOv8-TensorRT-main-2024.9.18/csrc/jetson/ | - |
| YOLOv8-TensorRT-main-2024.9.18/csrc/jetson/detect/ | - |
| YOLOv8-TensorRT-main-2024.9.18/csrc/jetson/detect/CMakeLists.txt | 1.53KB |
| YOLOv8-TensorRT-main-2024.9.18/csrc/jetson/detect/include/ | - |
| YOLOv8-TensorRT-main-2024.9.18/csrc/jetson/detect/include/common.hpp | 4.34KB |
| YOLOv8-TensorRT-main-2024.9.18/csrc/jetson/detect/include/yolov8.hpp | 9.75KB |
| YOLOv8-TensorRT-main-2024.9.18/csrc/jetson/detect/main.cpp | 5.41KB |
| YOLOv8-TensorRT-main-2024.9.18/csrc/jetson/pose/ | - |
| YOLOv8-TensorRT-main-2024.9.18/csrc/jetson/pose/CMakeLists.txt | 1.68KB |
| YOLOv8-TensorRT-main-2024.9.18/csrc/jetson/pose/include/ | - |
| YOLOv8-TensorRT-main-2024.9.18/csrc/jetson/pose/include/common.hpp | 4.37KB |
| YOLOv8-TensorRT-main-2024.9.18/csrc/jetson/pose/include/yolov8-pose.hpp | 12.65KB |
| YOLOv8-TensorRT-main-2024.9.18/csrc/jetson/pose/main.cpp | 6.81KB |
| YOLOv8-TensorRT-main-2024.9.18/csrc/jetson/segment/ | - |
| YOLOv8-TensorRT-main-2024.9.18/csrc/jetson/segment/CMakeLists.txt | 1.68KB |
| YOLOv8-TensorRT-main-2024.9.18/csrc/jetson/segment/include/ | - |
| YOLOv8-TensorRT-main-2024.9.18/csrc/jetson/segment/include/common.hpp | 4.37KB |
| YOLOv8-TensorRT-main-2024.9.18/csrc/jetson/segment/include/yolov8-seg.hpp | 12.7KB |
| YOLOv8-TensorRT-main-2024.9.18/csrc/jetson/segment/main.cpp | 6.17KB |
| YOLOv8-TensorRT-main-2024.9.18/csrc/pose/ | - |
| YOLOv8-TensorRT-main-2024.9.18/csrc/pose/normal/ | - |
| YOLOv8-TensorRT-main-2024.9.18/csrc/pose/normal/CMakeLists.txt | 1.7KB |
| YOLOv8-TensorRT-main-2024.9.18/csrc/pose/normal/include/ | - |
| YOLOv8-TensorRT-main-2024.9.18/csrc/pose/normal/include/common.hpp | 4.37KB |
| YOLOv8-TensorRT-main-2024.9.18/csrc/pose/normal/include/yolov8-pose.hpp | 12.69KB |
| YOLOv8-TensorRT-main-2024.9.18/csrc/pose/normal/main.cpp | 6.81KB |
| YOLOv8-TensorRT-main-2024.9.18/csrc/segment/ | - |
| YOLOv8-TensorRT-main-2024.9.18/csrc/segment/normal/ | - |
| YOLOv8-TensorRT-main-2024.9.18/csrc/segment/normal/CMakeLists.txt | 1.7KB |
| YOLOv8-TensorRT-main-2024.9.18/csrc/segment/normal/include/ | - |
| YOLOv8-TensorRT-main-2024.9.18/csrc/segment/normal/include/common.hpp | 4.37KB |
| YOLOv8-TensorRT-main-2024.9.18/csrc/segment/normal/include/yolov8-seg.hpp | 13.59KB |
| YOLOv8-TensorRT-main-2024.9.18/csrc/segment/normal/main.cpp | 6.17KB |
| YOLOv8-TensorRT-main-2024.9.18/csrc/segment/simple/ | - |
| YOLOv8-TensorRT-main-2024.9.18/csrc/segment/simple/CMakeLists.txt | 1.7KB |
| YOLOv8-TensorRT-main-2024.9.18/csrc/segment/simple/include/ | - |
| YOLOv8-TensorRT-main-2024.9.18/csrc/segment/simple/include/common.hpp | 4.37KB |
| YOLOv8-TensorRT-main-2024.9.18/csrc/segment/simple/include/yolov8-seg.hpp | 12.74KB |
| YOLOv8-TensorRT-main-2024.9.18/csrc/segment/simple/main.cpp | 6.17KB |
| YOLOv8-TensorRT-main-2024.9.18/data/ | - |
| YOLOv8-TensorRT-main-2024.9.18/data/bus.jpg | 476.01KB |
| YOLOv8-TensorRT-main-2024.9.18/data/bus1.jpg | 476.01KB |
| YOLOv8-TensorRT-main-2024.9.18/data/zidane.jpg | 164.99KB |
| YOLOv8-TensorRT-main-2024.9.18/data/zidane1.jpg | 164.99KB |
| YOLOv8-TensorRT-main-2024.9.18/docs/ | - |
| YOLOv8-TensorRT-main-2024.9.18/docs/API-Build.md | 719B |
| YOLOv8-TensorRT-main-2024.9.18/docs/Jetson.md | 4.66KB |
| YOLOv8-TensorRT-main-2024.9.18/docs/Normal.md | 2.09KB |
| YOLOv8-TensorRT-main-2024.9.18/docs/Pose.md | 2.89KB |
| YOLOv8-TensorRT-main-2024.9.18/docs/Segment.md | 6.22KB |
| YOLOv8-TensorRT-main-2024.9.18/docs/star.md | 172B |
| YOLOv8-TensorRT-main-2024.9.18/export-det.py | 3.06KB |
| YOLOv8-TensorRT-main-2024.9.18/export-seg.py | 2.25KB |
| YOLOv8-TensorRT-main-2024.9.18/gen_pkl.py | 1.28KB |
| YOLOv8-TensorRT-main-2024.9.18/infer-det-bach1.py | 2.83KB |
| YOLOv8-TensorRT-main-2024.9.18/infer-det-bach4.py | 5.62KB |
| YOLOv8-TensorRT-main-2024.9.18/infer-det-without-torch.py | 2.71KB |
| YOLOv8-TensorRT-main-2024.9.18/infer-det.py | 2.77KB |
| YOLOv8-TensorRT-main-2024.9.18/infer-pose-without-torch.py | 4.08KB |
| YOLOv8-TensorRT-main-2024.9.18/infer-pose.py | 4.04KB |
| YOLOv8-TensorRT-main-2024.9.18/infer-seg-without-torch.py | 3.68KB |
| YOLOv8-TensorRT-main-2024.9.18/infer-seg.py | 3.9KB |
| YOLOv8-TensorRT-main-2024.9.18/LICENSE | 1.04KB |
| YOLOv8-TensorRT-main-2024.9.18/models/ | - |
| YOLOv8-TensorRT-main-2024.9.18/models/api.py | 13.45KB |
| YOLOv8-TensorRT-main-2024.9.18/models/common.py | 6.26KB |
| YOLOv8-TensorRT-main-2024.9.18/models/cudart_api.py | 6.02KB |
| YOLOv8-TensorRT-main-2024.9.18/models/engine.py | 14.13KB |
| YOLOv8-TensorRT-main-2024.9.18/models/pycuda_api.py | 5.21KB |
| YOLOv8-TensorRT-main-2024.9.18/models/torch_utils.py | 3.47KB |
| YOLOv8-TensorRT-main-2024.9.18/models/utils.py | 9.94KB |
| YOLOv8-TensorRT-main-2024.9.18/models/__init__.py | 556B |
| YOLOv8-TensorRT-main-2024.9.18/models/__pycache__/ | - |
| YOLOv8-TensorRT-main-2024.9.18/models/__pycache__/common.cpython-39.pyc | 6.8KB |
| YOLOv8-TensorRT-main-2024.9.18/models/__pycache__/engine.cpython-311.pyc | 25.56KB |
| YOLOv8-TensorRT-main-2024.9.18/models/__pycache__/engine.cpython-39.pyc | 12.11KB |
| YOLOv8-TensorRT-main-2024.9.18/models/__pycache__/torch_utils.cpython-39.pyc | 2.91KB |
| YOLOv8-TensorRT-main-2024.9.18/models/__pycache__/utils.cpython-39.pyc | 7.47KB |
| YOLOv8-TensorRT-main-2024.9.18/models/__pycache__/__init__.cpython-311.pyc | 866B |
| YOLOv8-TensorRT-main-2024.9.18/models/__pycache__/__init__.cpython-39.pyc | 532B |
| YOLOv8-TensorRT-main-2024.9.18/README.md | 8.05KB |
| YOLOv8-TensorRT-main-2024.9.18/requirements.txt | 107B |
| YOLOv8-TensorRT-main-2024.9.18/trt-profile.py | 767B |
| YOLOv8-TensorRT-main-2024.9.18/yolov8n.engine | 8.73MB |
| YOLOv8-TensorRT-main-2024.9.18/yolov8n.onnx | 12.24MB |
| YOLOv8-TensorRT-main-2024.9.18/yolov8n.pt | 6.25MB |
| YOLOv8-TensorRT-main-2024.9.18/yolov8n_bach4.engine | 8.34MB |
| YOLOv8-TensorRT-main-2024.9.18/yolov8n_bach4.onnx | 12.62MB |
| YOLOv8-TensorRT-main-2024.9.18/yolov8n_bach4.pt | 6.25MB |
| YOLOv8-TensorRT-main-2024.9.18/__pycache__/ | - |
| YOLOv8-TensorRT-main-2024.9.18/__pycache__/config.cpython-39.pyc | 2.2KB |

Resource description

PyTorch → ONNX → TensorRT (Python inference)
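The export step in this pipeline bakes the bbox decoder and NMS into the ONNX graph (controlled by the `--iou-thres`, `--conf-thres`, and `--topk` flags of `export-det.py`). For readers unfamiliar with what the NMS stage computes, here is a minimal numpy sketch of greedy IoU-based NMS; it is illustrative only, not the repo's (or TensorRT's) implementation:

```python
import numpy as np

def nms(boxes, scores, iou_thres=0.65):
    """Greedy NMS over (N, 4) xyxy boxes; returns kept indices by score."""
    order = scores.argsort()[::-1]       # highest score first
    keep = []
    while order.size > 0:
        i = order[0]
        keep.append(int(i))
        if order.size == 1:
            break
        rest = order[1:]
        # intersection of box i with every remaining box
        x1 = np.maximum(boxes[i, 0], boxes[rest, 0])
        y1 = np.maximum(boxes[i, 1], boxes[rest, 1])
        x2 = np.minimum(boxes[i, 2], boxes[rest, 2])
        y2 = np.minimum(boxes[i, 3], boxes[rest, 3])
        inter = np.clip(x2 - x1, 0, None) * np.clip(y2 - y1, 0, None)
        area_i = (boxes[i, 2] - boxes[i, 0]) * (boxes[i, 3] - boxes[i, 1])
        area_r = (boxes[rest, 2] - boxes[rest, 0]) * (boxes[rest, 3] - boxes[rest, 1])
        iou = inter / (area_i + area_r - inter)
        order = rest[iou <= iou_thres]   # drop boxes overlapping box i too much
    return keep

boxes = np.array([[0, 0, 10, 10], [1, 1, 10, 10], [20, 20, 30, 30]], dtype=np.float32)
scores = np.array([0.9, 0.8, 0.7], dtype=np.float32)
print(nms(boxes, scores))  # [0, 2]: the overlapping second box is suppressed
```

Because this logic lives inside the exported "end2end" model, the engine's outputs are already deduplicated detections rather than raw anchors.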
# YOLOv8-TensorRT

`YOLOv8` using TensorRT accelerate!

---

[![Build Status](https://img.shields.io/endpoint.svg?url=https%3A%2F%2Factions-badge.atrox.dev%2Fatrox%2Fsync-dotenv%2Fbadge&style=flat)](https://github.com/triple-Mu/YOLOv8-TensorRT)
[![Python Version](https://img.shields.io/badge/Python-3.8--3.10-FFD43B?logo=python)](https://github.com/triple-Mu/YOLOv8-TensorRT)
[![img](https://badgen.net/badge/icon/tensorrt?icon=azurepipelines&label)](https://developer.nvidia.com/tensorrt)
[![C++](https://img.shields.io/badge/CPP-11%2F14-yellow)](https://github.com/triple-Mu/YOLOv8-TensorRT)
[![img](https://badgen.net/github/license/triple-Mu/YOLOv8-TensorRT)](https://github.com/triple-Mu/YOLOv8-TensorRT/blob/main/LICENSE)
[![img](https://badgen.net/github/prs/triple-Mu/YOLOv8-TensorRT)](https://github.com/triple-Mu/YOLOv8-TensorRT/pulls)
[![img](https://img.shields.io/github/stars/triple-Mu/YOLOv8-TensorRT?color=ccf)](https://github.com/triple-Mu/YOLOv8-TensorRT)

---

# Prepare the environment

1. Install `CUDA` following the [`CUDA official website`](https://docs.nvidia.com/cuda/cuda-installation-guide-linux/index.html#download-the-nvidia-cuda-toolkit). 🚀 RECOMMENDED `CUDA` >= 11.4
2. Install `TensorRT` following the [`TensorRT official website`](https://developer.nvidia.com/nvidia-tensorrt-8x-download). 🚀 RECOMMENDED `TensorRT` >= 8.4
3. Install python requirements.
   ``` shell
   pip install -r requirements.txt
   ```
4. Install the [`ultralytics`](https://github.com/ultralytics/ultralytics) package for ONNX export or TensorRT API building.
   ``` shell
   pip install ultralytics
   ```
5. Prepare your own PyTorch weights, such as `yolov8s.pt` or `yolov8s-seg.pt`.

***NOTICE:*** Please use the latest `CUDA` and `TensorRT`, so that you can achieve the fastest speed! If you have to use a lower version of `CUDA` and `TensorRT`, please read the relevant issues carefully!

# Normal Usage

If you get ONNX from the original [`ultralytics`](https://github.com/ultralytics/ultralytics) repo, you should build the engine yourself, and you can only use the `c++` inference code to deserialize the engine and run inference. You can find more information in [`Normal.md`](docs/Normal.md)! Besides, the other scripts won't work.

# Export End2End ONNX with NMS

You can export your ONNX model with the `ultralytics` API and, at the same time, add postprocessing such as the bbox decoder and `NMS` into the ONNX model.

``` shell
python3 export-det.py \
--weights yolov8s.pt \
--iou-thres 0.65 \
--conf-thres 0.25 \
--topk 100 \
--opset 11 \
--sim \
--input-shape 1 3 640 640 \
--device cuda:0
```

#### Description of all arguments

- `--weights` : The PyTorch model you trained.
- `--iou-thres` : IoU threshold for the NMS plugin.
- `--conf-thres` : Confidence threshold for the NMS plugin.
- `--topk` : Max number of detection bboxes.
- `--opset` : ONNX opset version; default is 11.
- `--sim` : Whether to simplify your ONNX model.
- `--input-shape` : Input shape for your model; should be 4-dimensional.
- `--device` : The CUDA device you use to export the engine.

You will get an ONNX model whose prefix is the same as the input weights.

### Just Taste First

If you just want a quick taste, you can download ONNX models that were exported by the `YOLOv8` package and modified by me.

[**YOLOv8-n**](https://triplemu-shared.oss-cn-beijing.aliyuncs.com/models/yolov8n.onnx?OSSAccessKeyId=LTAI5tNk9iiMqhFC64jCcgpv&Expires=2690974569&Signature=3ct9pnRygBduWdgAtfKOQAt4PeU%3D)
[**YOLOv8-s**](https://triplemu-shared.oss-cn-beijing.aliyuncs.com/models/yolov8s.onnx?OSSAccessKeyId=LTAI5tNk9iiMqhFC64jCcgpv&Expires=10000000001690974000&Signature=cbHjUwmRsYdvilcirzjBI6%2BzmvI%3D)
[**YOLOv8-m**](https://triplemu-shared.oss-cn-beijing.aliyuncs.com/models/yolov8m.onnx?OSSAccessKeyId=LTAI5tNk9iiMqhFC64jCcgpv&Expires=101690974603&Signature=XnJnQqbKsnJSKSgqVQ41kxoeETU%3D)
[**YOLOv8-l**](https://triplemu-shared.oss-cn-beijing.aliyuncs.com/models/yolov8l.onnx?OSSAccessKeyId=LTAI5tNk9iiMqhFC64jCcgpv&Expires=2690974619&Signature=djxvNzcaFosHrMS5ylWh1R0%2Ff8E%3D)
[**YOLOv8-x**](https://triplemu-shared.oss-cn-beijing.aliyuncs.com/models/yolov8x.onnx?OSSAccessKeyId=LTAI5tNk9iiMqhFC64jCcgpv&Expires=2690974637&Signature=DMmuT2wlfBzai%2BBpYJFcmNbkMKU%3D)

# Build End2End Engine from ONNX

### 1. Build Engine with the TensorRT ONNX Python API

You can export a TensorRT engine from ONNX with [`build.py`](build.py).

Usage:

``` shell
python3 build.py \
--weights yolov8s.onnx \
--iou-thres 0.65 \
--conf-thres 0.25 \
--topk 100 \
--fp16 \
--device cuda:0
```

#### Description of all arguments

- `--weights` : The ONNX model you downloaded.
- `--iou-thres` : IoU threshold for the NMS plugin.
- `--conf-thres` : Confidence threshold for the NMS plugin.
- `--topk` : Max number of detection bboxes.
- `--fp16` : Whether to export a half-precision engine.
- `--device` : The CUDA device you use to export the engine.

You can modify `iou-thres`, `conf-thres`, and `topk` yourself.

### 2. Export Engine with the trtexec Tool

You can export a TensorRT engine with the [`trtexec`](https://github.com/NVIDIA/TensorRT/tree/main/samples/trtexec) tool.

Usage:

``` shell
/usr/src/tensorrt/bin/trtexec \
--onnx=yolov8s.onnx \
--saveEngine=yolov8s.engine \
--fp16
```

**If you installed TensorRT from a Debian package, the installation path of `trtexec` is `/usr/src/tensorrt/bin/trtexec`.**

**If you installed TensorRT from a tar package, `trtexec` is under the `bin` folder of the path you decompressed to.**

# Build TensorRT Engine with the TensorRT API

Please see more information in [`API-Build.md`](docs/API-Build.md)

***Notice!!!*** We don't support the YOLOv8-seg model for now!!!

# Inference

## 1. Infer with a Python script

You can infer images with the engine using [`infer-det.py`](infer-det.py).

Usage:

``` shell
python3 infer-det.py \
--engine yolov8s.engine \
--imgs data \
--show \
--out-dir outputs \
--device cuda:0
```

#### Description of all arguments

- `--engine` : The engine you exported.
- `--imgs` : The path of the images you want to detect.
- `--show` : Whether to show detection results.
- `--out-dir` : Where to save result images. It has no effect when the `--show` flag is used.
- `--device` : The CUDA device you use.
- `--profile` : Profile the TensorRT engine.

## 2. Infer with C++

You can infer with C++ in [`csrc/detect/end2end`](csrc/detect/end2end).

### Build:

Please set your own libraries in [`CMakeLists.txt`](csrc/detect/end2end/CMakeLists.txt) and modify `CLASS_NAMES` and `COLORS` in [`main.cpp`](csrc/detect/end2end/main.cpp).

``` shell
export root=${PWD}
cd csrc/detect/end2end
mkdir -p build && cd build
cmake ..
make
mv yolov8 ${root}
cd ${root}
```

Usage:

``` shell
# infer image
./yolov8 yolov8s.engine data/bus.jpg
# infer images
./yolov8 yolov8s.engine data
# infer video
./yolov8 yolov8s.engine data/test.mp4 # the video path
```

# TensorRT Segment Deploy

Please see more information in [`Segment.md`](docs/Segment.md)

# TensorRT Pose Deploy

Please see more information in [`Pose.md`](docs/Pose.md)

# DeepStream Detection Deploy

See more in [`README.md`](csrc/deepstream/README.md)

# Jetson Deploy

Only tested on `Jetson-NX 4GB`. See more in [`Jetson.md`](docs/Jetson.md)

# Profile your engine

If you want to profile the TensorRT engine:

Usage:

``` shell
python3 trt-profile.py --engine yolov8s.engine --device cuda:0
```

# Refuse To Use PyTorch for Model Inference!!!

If you need to break away from PyTorch and use TensorRT for inference, you can get more information from [`infer-det-without-torch.py`](infer-det-without-torch.py). The usage is the same as the PyTorch version, but its performance is much worse. You can use `cuda-python` or `pycuda` for inference. Please install one of them:

```shell
pip install cuda-python
# or
pip install pycuda
```

Usage:

``` shell
python3 infer-det-without-torch.py \
--engine yolov8s.engine \
--imgs data \
--show \
--out-dir outputs \
--method cudart
```

#### Description of all arguments

- `--engine` : The engine you exported.
- `--imgs` : The path of the images you want to detect.
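One detail the inference scripts all share: the end2end engine returns boxes in the letterboxed network-input space (e.g. 640×640), so they must be mapped back to the original image by undoing the padding and the resize. A hedged numpy sketch of that mapping; the `ratio`/`dwdh` names follow the common letterbox convention and are illustrative, not the repo's exact code:

```python
import numpy as np

def scale_boxes(boxes, ratio, dwdh):
    """Map (N, 4) xyxy boxes from letterboxed space back to the source image.

    ratio : scale factor applied when resizing the source image
    dwdh  : (dw, dh) padding added on the left/top to center the resized image
    """
    boxes = np.asarray(boxes, dtype=np.float32).copy()
    dw, dh = dwdh
    boxes[:, [0, 2]] -= dw   # remove horizontal padding
    boxes[:, [1, 3]] -= dh   # remove vertical padding
    boxes /= ratio           # undo the resize
    return boxes

# a 1280x720 image resized by 0.5 to 640x360, then padded by
# 140 px on top and bottom to reach 640x640 (dh = top offset)
out = scale_boxes([[100.0, 200.0, 300.0, 400.0]], ratio=0.5, dwdh=(0.0, 140.0))
print(out)  # [[200. 120. 600. 520.]]
```

The same transform applies per image in the batch-4 scripts, since each frame in a batch can carry its own `ratio` and padding.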
