YOLOv8-TensorRT-main.zip
Size: 44.32MB
Price: 10 credits
Downloads: 0
Rating: 5.0
Uploader: weixin_62918638
Updated: 2025-09-22

YOLOv8 TensorRT Python inference

Resource file list (approximate)

File name    Size
YOLOv8-TensorRT-main/    -
YOLOv8-TensorRT-main/build.py    1.87KB
YOLOv8-TensorRT-main/cmd.txt    864B
YOLOv8-TensorRT-main/config.py    17.9KB
YOLOv8-TensorRT-main/csrc/    -
YOLOv8-TensorRT-main/csrc/cls/    -
YOLOv8-TensorRT-main/csrc/cls/normal/    -
YOLOv8-TensorRT-main/csrc/cls/normal/cmake/    -
YOLOv8-TensorRT-main/csrc/cls/normal/cmake/FindTensorRT.cmake    5.17KB
YOLOv8-TensorRT-main/csrc/cls/normal/cmake/Function.cmake    408B
YOLOv8-TensorRT-main/csrc/cls/normal/CMakeLists.txt    1.85KB
YOLOv8-TensorRT-main/csrc/cls/normal/include/    -
YOLOv8-TensorRT-main/csrc/cls/normal/include/common.hpp    3.52KB
YOLOv8-TensorRT-main/csrc/cls/normal/include/filesystem.hpp    187.37KB
YOLOv8-TensorRT-main/csrc/cls/normal/include/yolov8-cls.hpp    7.37KB
YOLOv8-TensorRT-main/csrc/cls/normal/main.cpp    61.69KB
YOLOv8-TensorRT-main/csrc/deepstream/    -
YOLOv8-TensorRT-main/csrc/deepstream/CMakeLists.txt    1.52KB
YOLOv8-TensorRT-main/csrc/deepstream/config_yoloV8.txt    3.06KB
YOLOv8-TensorRT-main/csrc/deepstream/custom_bbox_parser/    -
YOLOv8-TensorRT-main/csrc/deepstream/custom_bbox_parser/nvdsparsebbox_yoloV8.cpp    4.77KB
YOLOv8-TensorRT-main/csrc/deepstream/deepstream_app_config.txt    2.56KB
YOLOv8-TensorRT-main/csrc/deepstream/labels.txt    625B
YOLOv8-TensorRT-main/csrc/deepstream/README.md    2.08KB
YOLOv8-TensorRT-main/csrc/detect/    -
YOLOv8-TensorRT-main/csrc/detect/end2end/    -
YOLOv8-TensorRT-main/csrc/detect/end2end/cmake/    -
YOLOv8-TensorRT-main/csrc/detect/end2end/cmake/FindTensorRT.cmake    5.17KB
YOLOv8-TensorRT-main/csrc/detect/end2end/cmake/Function.cmake    408B
YOLOv8-TensorRT-main/csrc/detect/end2end/CMakeLists.txt    1.85KB
YOLOv8-TensorRT-main/csrc/detect/end2end/include/    -
YOLOv8-TensorRT-main/csrc/detect/end2end/include/common.hpp    3.71KB
YOLOv8-TensorRT-main/csrc/detect/end2end/include/filesystem.hpp    187.37KB
YOLOv8-TensorRT-main/csrc/detect/end2end/include/yolov8.hpp    11.8KB
YOLOv8-TensorRT-main/csrc/detect/end2end/main.cpp    5.6KB
YOLOv8-TensorRT-main/csrc/detect/normal/    -
YOLOv8-TensorRT-main/csrc/detect/normal/cmake/    -
YOLOv8-TensorRT-main/csrc/detect/normal/cmake/FindTensorRT.cmake    5.17KB
YOLOv8-TensorRT-main/csrc/detect/normal/cmake/Function.cmake    408B
YOLOv8-TensorRT-main/csrc/detect/normal/CMakeLists.txt    1.99KB
YOLOv8-TensorRT-main/csrc/detect/normal/include/    -
YOLOv8-TensorRT-main/csrc/detect/normal/include/common.hpp    3.71KB
YOLOv8-TensorRT-main/csrc/detect/normal/include/filesystem.hpp    187.37KB
YOLOv8-TensorRT-main/csrc/detect/normal/include/yolov8.hpp    13.2KB
YOLOv8-TensorRT-main/csrc/detect/normal/main.cpp    5.81KB
YOLOv8-TensorRT-main/csrc/jetson/    -
YOLOv8-TensorRT-main/csrc/jetson/detect/    -
YOLOv8-TensorRT-main/csrc/jetson/detect/CMakeLists.txt    1.53KB
YOLOv8-TensorRT-main/csrc/jetson/detect/include/    -
YOLOv8-TensorRT-main/csrc/jetson/detect/include/common.hpp    4.34KB
YOLOv8-TensorRT-main/csrc/jetson/detect/include/yolov8.hpp    9.75KB
YOLOv8-TensorRT-main/csrc/jetson/detect/main.cpp    5.41KB
YOLOv8-TensorRT-main/csrc/jetson/pose/    -
YOLOv8-TensorRT-main/csrc/jetson/pose/CMakeLists.txt    1.68KB
YOLOv8-TensorRT-main/csrc/jetson/pose/include/    -
YOLOv8-TensorRT-main/csrc/jetson/pose/include/common.hpp    4.37KB
YOLOv8-TensorRT-main/csrc/jetson/pose/include/yolov8-pose.hpp    12.65KB
YOLOv8-TensorRT-main/csrc/jetson/pose/main.cpp    6.81KB
YOLOv8-TensorRT-main/csrc/jetson/segment/    -
YOLOv8-TensorRT-main/csrc/jetson/segment/CMakeLists.txt    1.68KB
YOLOv8-TensorRT-main/csrc/jetson/segment/include/    -
YOLOv8-TensorRT-main/csrc/jetson/segment/include/common.hpp    4.37KB
YOLOv8-TensorRT-main/csrc/jetson/segment/include/yolov8-seg.hpp    12.7KB
YOLOv8-TensorRT-main/csrc/jetson/segment/main.cpp    6.17KB
YOLOv8-TensorRT-main/csrc/obb/    -
YOLOv8-TensorRT-main/csrc/obb/normal/    -
YOLOv8-TensorRT-main/csrc/obb/normal/cmake/    -
YOLOv8-TensorRT-main/csrc/obb/normal/cmake/FindTensorRT.cmake    5.17KB
YOLOv8-TensorRT-main/csrc/obb/normal/cmake/Function.cmake    408B
YOLOv8-TensorRT-main/csrc/obb/normal/CMakeLists.txt    1.84KB
YOLOv8-TensorRT-main/csrc/obb/normal/include/    -
YOLOv8-TensorRT-main/csrc/obb/normal/include/common.hpp    3.7KB
YOLOv8-TensorRT-main/csrc/obb/normal/include/filesystem.hpp    187.37KB
YOLOv8-TensorRT-main/csrc/obb/normal/include/yolov8-obb.hpp    11.19KB
YOLOv8-TensorRT-main/csrc/obb/normal/main.cpp    5.09KB
YOLOv8-TensorRT-main/csrc/pose/    -
YOLOv8-TensorRT-main/csrc/pose/normal/    -
YOLOv8-TensorRT-main/csrc/pose/normal/cmake/    -
YOLOv8-TensorRT-main/csrc/pose/normal/cmake/FindTensorRT.cmake    5.17KB
YOLOv8-TensorRT-main/csrc/pose/normal/cmake/Function.cmake    408B
YOLOv8-TensorRT-main/csrc/pose/normal/CMakeLists.txt    1.99KB
YOLOv8-TensorRT-main/csrc/pose/normal/include/    -
YOLOv8-TensorRT-main/csrc/pose/normal/include/common.hpp    3.74KB
YOLOv8-TensorRT-main/csrc/pose/normal/include/filesystem.hpp    187.37KB
YOLOv8-TensorRT-main/csrc/pose/normal/include/yolov8-pose.hpp    12.69KB
YOLOv8-TensorRT-main/csrc/pose/normal/main.cpp    6.98KB
YOLOv8-TensorRT-main/csrc/segment/    -
YOLOv8-TensorRT-main/csrc/segment/normal/    -
YOLOv8-TensorRT-main/csrc/segment/normal/CMakeLists.txt    1.7KB
YOLOv8-TensorRT-main/csrc/segment/normal/include/    -
YOLOv8-TensorRT-main/csrc/segment/normal/include/common.hpp    4.37KB
YOLOv8-TensorRT-main/csrc/segment/normal/include/yolov8-seg.hpp    13.59KB
YOLOv8-TensorRT-main/csrc/segment/normal/main.cpp    6.18KB
YOLOv8-TensorRT-main/csrc/segment/simple/    -
YOLOv8-TensorRT-main/csrc/segment/simple/CMakeLists.txt    1.7KB
YOLOv8-TensorRT-main/csrc/segment/simple/include/    -
YOLOv8-TensorRT-main/csrc/segment/simple/include/common.hpp    4.37KB
YOLOv8-TensorRT-main/csrc/segment/simple/include/yolov8-seg.hpp    12.74KB
YOLOv8-TensorRT-main/csrc/segment/simple/main.cpp    6.18KB
YOLOv8-TensorRT-main/data/    -
YOLOv8-TensorRT-main/data/bus.jpg    476.01KB
YOLOv8-TensorRT-main/data/highway.mp4    3.74MB
YOLOv8-TensorRT-main/data/zidane.jpg    164.99KB
YOLOv8-TensorRT-main/docs/    -
YOLOv8-TensorRT-main/docs/API-Build.md    719B
YOLOv8-TensorRT-main/docs/Cls.md    2.61KB
YOLOv8-TensorRT-main/docs/Jetson.md    4.66KB
YOLOv8-TensorRT-main/docs/Normal.md    2.11KB
YOLOv8-TensorRT-main/docs/Obb.md    2.73KB
YOLOv8-TensorRT-main/docs/Pose.md    2.89KB
YOLOv8-TensorRT-main/docs/Segment.md    6.22KB
YOLOv8-TensorRT-main/docs/star.md    172B
YOLOv8-TensorRT-main/export-det.py    3.06KB
YOLOv8-TensorRT-main/infer-det-without-torch.py    2.97KB
YOLOv8-TensorRT-main/infer-det.py    3.12KB
YOLOv8-TensorRT-main/models/    -
YOLOv8-TensorRT-main/models/api.py    13.45KB
YOLOv8-TensorRT-main/models/common.py    6.26KB
YOLOv8-TensorRT-main/models/cudart_api.py    6.02KB
YOLOv8-TensorRT-main/models/engine.py    14.13KB
YOLOv8-TensorRT-main/models/pycuda_api.py    5.21KB
YOLOv8-TensorRT-main/models/torch_utils.py    3.96KB
YOLOv8-TensorRT-main/models/utils.py    8.48KB
YOLOv8-TensorRT-main/models/__init__.py    556B
YOLOv8-TensorRT-main/models/__pycache__/    -
YOLOv8-TensorRT-main/models/__pycache__/common.cpython-39.pyc    6.82KB
YOLOv8-TensorRT-main/models/__pycache__/cudart_api.cpython-39.pyc    5.06KB
YOLOv8-TensorRT-main/models/__pycache__/engine.cpython-39.pyc    12.13KB
YOLOv8-TensorRT-main/models/__pycache__/torch_utils.cpython-39.pyc    3.31KB
YOLOv8-TensorRT-main/models/__pycache__/utils.cpython-39.pyc    7.25KB
YOLOv8-TensorRT-main/models/__pycache__/__init__.cpython-39.pyc    554B
YOLOv8-TensorRT-main/onnx_inference.py    3.94KB
YOLOv8-TensorRT-main/output/    -
YOLOv8-TensorRT-main/output/output_video.mp4    16.92MB
YOLOv8-TensorRT-main/README.md    7.12KB
YOLOv8-TensorRT-main/requirements.txt    158B
YOLOv8-TensorRT-main/trt-profile.py    767B
YOLOv8-TensorRT-main/video_inference.py    4.32KB
YOLOv8-TensorRT-main/video_inference_without_torch.py    3.78KB
YOLOv8-TensorRT-main/weights/    -
YOLOv8-TensorRT-main/weights/yolov8n.engine    9.58MB
YOLOv8-TensorRT-main/weights/yolov8n.onnx    12.24MB
YOLOv8-TensorRT-main/weights/yolov8n.pt    6.23MB
YOLOv8-TensorRT-main/__pycache__/    -
YOLOv8-TensorRT-main/__pycache__/config.cpython-39.pyc    14.56KB

Resource description

YOLOv8 TensorRT Python inference
# YOLOv8-TensorRT

`YOLOv8` using TensorRT accelerate!

---

[![Build Status](https://img.shields.io/endpoint.svg?url=https%3A%2F%2Factions-badge.atrox.dev%2Fatrox%2Fsync-dotenv%2Fbadge&style=flat)](https://github.com/triple-Mu/YOLOv8-TensorRT)
[![Python Version](https://img.shields.io/badge/Python-3.8--3.10-FFD43B?logo=python)](https://github.com/triple-Mu/YOLOv8-TensorRT)
[![img](https://badgen.net/badge/icon/tensorrt?icon=azurepipelines&label)](https://developer.nvidia.com/tensorrt)
[![C++](https://img.shields.io/badge/CPP-11%2F14-yellow)](https://github.com/triple-Mu/YOLOv8-TensorRT)
[![img](https://badgen.net/github/license/triple-Mu/YOLOv8-TensorRT)](https://github.com/triple-Mu/YOLOv8-TensorRT/blob/main/LICENSE)
[![img](https://badgen.net/github/prs/triple-Mu/YOLOv8-TensorRT)](https://github.com/triple-Mu/YOLOv8-TensorRT/pulls)
[![img](https://img.shields.io/github/stars/triple-Mu/YOLOv8-TensorRT?color=ccf)](https://github.com/triple-Mu/YOLOv8-TensorRT)

---

# Prepare the environment

1. Install `CUDA` following the [`CUDA official website`](https://docs.nvidia.com/cuda/cuda-installation-guide-linux/index.html#download-the-nvidia-cuda-toolkit). 🚀 RECOMMENDED `CUDA` >= 11.4

2. Install `TensorRT` following the [`TensorRT official website`](https://developer.nvidia.com/nvidia-tensorrt-8x-download). 🚀 RECOMMENDED `TensorRT` >= 8.4

3. Install the Python requirements.

   ``` shell
   pip install -r requirements.txt
   ```

4. Install the [`ultralytics`](https://github.com/ultralytics/ultralytics) package for ONNX export or TensorRT API building.

   ``` shell
   pip install ultralytics
   ```

5. Prepare your own PyTorch weights such as `yolov8s.pt` or `yolov8s-seg.pt`.

***NOTICE:***

Please use the latest `CUDA` and `TensorRT`, so that you can achieve the fastest speed!

If you have to use a lower version of `CUDA` and `TensorRT`, please read the relevant issues carefully!

# Normal Usage

If you get ONNX from the original [`ultralytics`](https://github.com/ultralytics/ultralytics) repo, you should build the engine yourself.

You can only use the `c++` inference code to deserialize the engine and do inference.

You can get more information in [`Normal.md`](docs/Normal.md)!

Besides, the other scripts won't work.

# Export End2End ONNX with NMS

You can export your ONNX model with the `ultralytics` API and add postprocessing such as the bbox decoder and `NMS` into the ONNX model at the same time.

``` shell
python3 export-det.py \
--weights yolov8s.pt \
--iou-thres 0.65 \
--conf-thres 0.25 \
--topk 100 \
--opset 11 \
--sim \
--input-shape 1 3 640 640 \
--device cuda:0
```

#### Description of all arguments

- `--weights` : The PyTorch model you trained.
- `--iou-thres` : IoU threshold for the NMS plugin.
- `--conf-thres` : Confidence threshold for the NMS plugin.
- `--topk` : Max number of detection bboxes.
- `--opset` : ONNX opset version, default is 11.
- `--sim` : Whether to simplify your ONNX model.
- `--input-shape` : Input shape for your model, should be 4 dimensions.
- `--device` : The CUDA device you export the engine on.

You will get an ONNX model whose prefix is the same as the input weights.

# Build End2End Engine from ONNX

### 1. Build Engine by TensorRT ONNX Python API

You can export a TensorRT engine from ONNX with [`build.py`](build.py).

Usage:

``` shell
python3 build.py \
--weights yolov8s.onnx \
--iou-thres 0.65 \
--conf-thres 0.25 \
--topk 100 \
--fp16 \
--device cuda:0
```

#### Description of all arguments

- `--weights` : The ONNX model you downloaded.
- `--iou-thres` : IoU threshold for the NMS plugin.
- `--conf-thres` : Confidence threshold for the NMS plugin.
- `--topk` : Max number of detection bboxes.
- `--fp16` : Whether to export a half-precision engine.
- `--device` : The CUDA device you export the engine on.

You can modify `iou-thres`, `conf-thres` and `topk` yourself.
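For reference, building an engine from the exported ONNX can also be done directly with the TensorRT Python API. The sketch below is a minimal, hedged example only: it assumes TensorRT 8.x and a local `yolov8s.onnx`, and unlike `build.py` it does not configure the decoding/NMS-plugin options listed above.

```python
# Minimal sketch: build a TensorRT engine from ONNX with the Python API.
# Assumptions: TensorRT 8.x, "yolov8s.onnx" in the working directory.
# Unlike build.py, no NMS-plugin parameters are configured here.
import tensorrt as trt

logger = trt.Logger(trt.Logger.INFO)
builder = trt.Builder(logger)
network = builder.create_network(
    1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH))
parser = trt.OnnxParser(network, logger)

with open('yolov8s.onnx', 'rb') as f:
    if not parser.parse(f.read()):
        for i in range(parser.num_errors):
            print(parser.get_error(i))
        raise RuntimeError('failed to parse ONNX model')

config = builder.create_builder_config()
config.set_flag(trt.BuilderFlag.FP16)  # comparable to the --fp16 flag above

serialized = builder.build_serialized_network(network, config)
with open('yolov8s.engine', 'wb') as f:
    f.write(serialized)
```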
### 2. Export Engine by trtexec Tools

You can export a TensorRT engine with the [`trtexec`](https://github.com/NVIDIA/TensorRT/tree/main/samples/trtexec) tool.

Usage:

``` shell
/usr/src/tensorrt/bin/trtexec \
--onnx=yolov8s.onnx \
--saveEngine=yolov8s.engine \
--fp16
```

**If you installed TensorRT from a debian package, the installation path of `trtexec` is `/usr/src/tensorrt/bin/trtexec`.**

**If you installed TensorRT from a tar package, `trtexec` is under the `bin` folder in the path you decompressed.**

# Build TensorRT Engine by TensorRT API

Please see more information in [`API-Build.md`](docs/API-Build.md)

***Notice!!!*** We don't support the YOLOv8-seg model now!!!

# Inference

## 1. Infer with python script

You can infer images with the engine using [`infer-det.py`](infer-det.py).

Usage:

``` shell
python3 infer-det.py \
--engine yolov8s.engine \
--imgs data \
--show \
--out-dir outputs \
--device cuda:0
```

#### Description of all arguments

- `--engine` : The engine you exported.
- `--imgs` : The path of the images you want to detect.
- `--show` : Whether to show detection results.
- `--out-dir` : Where to save the detection result images. It does not take effect when the `--show` flag is used.
- `--device` : The CUDA device you use.
- `--profile` : Profile the TensorRT engine.
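As a rough illustration of what the Python inference path does before calling the engine: the detector above was exported with `--input-shape 1 3 640 640`, so each image is letterboxed, converted to RGB and normalized into a `1x3x640x640` float tensor first. The snippet below is only a hedged sketch of that preprocessing; the exact resize/padding logic in `infer-det.py` may differ in detail.

```python
# Hedged preprocessing sketch for a 1x3x640x640 end2end engine input.
# Assumptions: OpenCV and NumPy installed; engine exported with
# --input-shape 1 3 640 640. The repo's own letterbox code may differ slightly.
import cv2
import numpy as np

def preprocess(path, size=640):
    img = cv2.imread(path)                       # BGR, HWC, uint8
    h, w = img.shape[:2]
    r = min(size / h, size / w)                  # scale so the long side fits
    nh, nw = round(h * r), round(w * r)
    canvas = np.full((size, size, 3), 114, dtype=np.uint8)   # gray letterbox padding
    canvas[:nh, :nw] = cv2.resize(img, (nw, nh))
    blob = canvas[:, :, ::-1].transpose(2, 0, 1)             # BGR -> RGB, HWC -> CHW
    blob = np.ascontiguousarray(blob, dtype=np.float32) / 255.0
    return blob[None], r                         # add batch dim; keep ratio to rescale boxes

blob, ratio = preprocess('data/bus.jpg')
print(blob.shape)  # (1, 3, 640, 640)
```

Because the end2end export already embeds the bbox decoder and NMS, the engine returns final detections, and the remaining postprocessing is essentially scaling the boxes back by `1 / ratio`.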
## 2. Infer with C++

You can infer with C++ in [`csrc/detect/end2end`](csrc/detect/end2end).

### Build:

Please set your own libraries in [`CMakeLists.txt`](csrc/detect/end2end/CMakeLists.txt) and modify `CLASS_NAMES` and `COLORS` in [`main.cpp`](csrc/detect/end2end/main.cpp).

``` shell
export root=${PWD}
cd csrc/detect/end2end
mkdir -p build && cd build
cmake ..
make
mv yolov8 ${root}
cd ${root}
```

Usage:

``` shell
# infer image
./yolov8 yolov8s.engine data/bus.jpg
# infer images
./yolov8 yolov8s.engine data
# infer video
./yolov8 yolov8s.engine data/test.mp4 # the video path
```

# TensorRT Segment Deploy

Please see more information in [`Segment.md`](docs/Segment.md)

# TensorRT Pose Deploy

Please see more information in [`Pose.md`](docs/Pose.md)

# TensorRT Cls Deploy

Please see more information in [`Cls.md`](docs/Cls.md)

# TensorRT Obb Deploy

Please see more information in [`Obb.md`](docs/Obb.md)

# DeepStream Detection Deploy

See more in [`README.md`](csrc/deepstream/README.md)

# Jetson Deploy

Only tested on `Jetson-NX 4GB`.
See more in [`Jetson.md`](docs/Jetson.md)

# Profile your engine

If you want to profile the TensorRT engine:

Usage:

``` shell
python3 trt-profile.py --engine yolov8s.engine --device cuda:0
```

# Refuse To Use PyTorch for Model Inference!!!

If you need to break away from PyTorch and use TensorRT inference, you can get more information in [`infer-det-without-torch.py`](infer-det-without-torch.py). The usage is the same as the PyTorch version, but its performance is much worse.

You can use `cuda-python` or `pycuda` for inference.
Please install them with:

```shell
pip install cuda-python
# or
pip install pycuda
```

Usage:

``` shell
python3 infer-det-without-torch.py \
--engine yolov8s.engine \
--imgs data \
--show \
--out-dir outputs \
--method cudart
```

#### Description of all arguments

- `--engine` : The engine you exported.
- `--imgs` : The path of the images you want to detect.
- `--show` : Whether to show detection results.
- `--out-dir` : Where to save the detection result images. It does not take effect when the `--show` flag is used.
- `--method` : Choose `cudart` or `pycuda`, default is `cudart`.
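For completeness, the torch-free path boils down to managing the input/output buffers yourself. The sketch below is only a hedged outline of the idea behind `infer-det-without-torch.py` using `pycuda`: it assumes the TensorRT 8.x bindings API, a fixed-shape engine named `yolov8s.engine`, and that binding 0 is the image input; the real script handles this more carefully.

```python
# Hedged sketch of torch-free TensorRT inference with pycuda (TensorRT 8.x bindings API).
# Assumptions: fixed-shape engine "yolov8s.engine"; binding 0 is the image input;
# in real use, feed a preprocessed float32 blob matching the input shape.
import numpy as np
import pycuda.autoinit  # noqa: F401  creates a CUDA context on import
import pycuda.driver as cuda
import tensorrt as trt

logger = trt.Logger(trt.Logger.INFO)
with open('yolov8s.engine', 'rb') as f, trt.Runtime(logger) as runtime:
    engine = runtime.deserialize_cuda_engine(f.read())
context = engine.create_execution_context()

host, device, bindings = [], [], []
for i in range(engine.num_bindings):
    shape = context.get_binding_shape(i)
    dtype = trt.nptype(engine.get_binding_dtype(i))
    h = cuda.pagelocked_empty(trt.volume(shape), dtype)  # pinned host buffer
    d = cuda.mem_alloc(h.nbytes)                         # device buffer
    host.append(h)
    device.append(d)
    bindings.append(int(d))

blob = np.zeros(trt.volume(context.get_binding_shape(0)), dtype=np.float32)  # stand-in input
np.copyto(host[0], blob.ravel())
cuda.memcpy_htod(device[0], host[0])
context.execute_v2(bindings)                             # synchronous execution
for h, d in zip(host[1:], device[1:]):
    cuda.memcpy_dtoh(h, d)                               # copy detections back to host
```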
