yolov5_stereo_Pro.zip
Size: 71.18MB
Price: 26 points
Downloads: 0
Rating: 5.0
Uploader: qq_40700822
Updated: 2025-09-22

Improved YOLOv5 with binocular stereo distance measurement

Resource file list (approximate)

| Filename | Size |
|----------|------|
| yolov5_stereo_Pro/ | - |
| yolov5_stereo_Pro/LICENSE | 34.3KB |
| yolov5_stereo_Pro/detect_and_stereo_video_030.py | 28.78KB |
| yolov5_stereo_Pro/detect_and_stereo_video_033.py | 30.95KB |
| yolov5_stereo_Pro/cuda_test.py | 1.02KB |
| yolov5_stereo_Pro/README.md | 10.55KB |
| yolov5_stereo_Pro/train.py | 31.51KB |
| yolov5_stereo_Pro/test.py | 16.14KB |
| yolov5_stereo_Pro/tutorial.ipynb | 384.14KB |
| yolov5_stereo_Pro/Dockerfile | 1.68KB |
| yolov5_stereo_Pro/detect.py | 8.03KB |
| yolov5_stereo_Pro/hubconf.py | 5.15KB |
| yolov5_stereo_Pro/code.txt | 103B |
| yolov5_stereo_Pro/requirements.txt | 610B |
| yolov5_stereo_Pro/models/ | - |
| yolov5_stereo_Pro/models/yolov5s.yaml | 1.33KB |
| yolov5_stereo_Pro/models/experimental.py | 5.03KB |
| yolov5_stereo_Pro/models/__init__.py | - |
| yolov5_stereo_Pro/models/yolov5m.yaml | 1.33KB |
| yolov5_stereo_Pro/models/yolo.py | 11.78KB |
| yolov5_stereo_Pro/models/export.py | 4.32KB |
| yolov5_stereo_Pro/models/common.py | 12.69KB |
| yolov5_stereo_Pro/models/yolov5l.yaml | 1.33KB |
| yolov5_stereo_Pro/models/yolov5x.yaml | 1.33KB |
| yolov5_stereo_Pro/models/__pycache__/ | - |
| yolov5_stereo_Pro/models/__pycache__/yolo.cpython-36.pyc | 9.9KB |
| yolov5_stereo_Pro/models/__pycache__/experimental.cpython-37.pyc | 5.65KB |
| yolov5_stereo_Pro/models/__pycache__/common.cpython-36.pyc | 14.57KB |
| yolov5_stereo_Pro/models/__pycache__/__init__.cpython-38.pyc | 142B |
| yolov5_stereo_Pro/models/__pycache__/experimental.cpython-36.pyc | 5.69KB |
| yolov5_stereo_Pro/models/__pycache__/__init__.cpython-37.pyc | 138B |
| yolov5_stereo_Pro/models/__pycache__/experimental.cpython-38.pyc | 5.56KB |
| yolov5_stereo_Pro/models/__pycache__/yolo.cpython-38.pyc | 9.78KB |
| yolov5_stereo_Pro/models/__pycache__/yolo.cpython-37.pyc | 9.78KB |
| yolov5_stereo_Pro/models/__pycache__/common.cpython-37.pyc | 14.49KB |
| yolov5_stereo_Pro/models/__pycache__/common.cpython-38.pyc | 14.14KB |
| yolov5_stereo_Pro/models/__pycache__/__init__.cpython-36.pyc | 162B |
| yolov5_stereo_Pro/models/hub/ | - |
| yolov5_stereo_Pro/models/hub/yolov3-tiny.yaml | 1.17KB |
| yolov5_stereo_Pro/models/hub/yolov5s6.yaml | 1.93KB |
| yolov5_stereo_Pro/models/hub/yolov5-panet.yaml | 1.42KB |
| yolov5_stereo_Pro/models/hub/yolov5-fpn.yaml | 1.22KB |
| yolov5_stereo_Pro/models/hub/yolov5x6.yaml | 1.93KB |
| yolov5_stereo_Pro/models/hub/yolov5-p7.yaml | 2.18KB |
| yolov5_stereo_Pro/models/hub/anchors.yaml | 3.28KB |
| yolov5_stereo_Pro/models/hub/yolov3-spp.yaml | 1.5KB |
| yolov5_stereo_Pro/models/hub/yolov5m6.yaml | 1.93KB |
| yolov5_stereo_Pro/models/hub/yolov5l6.yaml | 1.93KB |
| yolov5_stereo_Pro/models/hub/yolov3.yaml | 1.49KB |
| yolov5_stereo_Pro/models/hub/yolov5-p6.yaml | 1.77KB |
| yolov5_stereo_Pro/models/hub/yolov5-p2.yaml | 1.7KB |
| yolov5_stereo_Pro/stereo/ | - |
| yolov5_stereo_Pro/stereo/dianyuntu_yolo.py | 8.64KB |
| yolov5_stereo_Pro/stereo/stereoconfig_040_2.py | 1.55KB |
| yolov5_stereo_Pro/stereo/stereo.py | 12.4KB |
| yolov5_stereo_Pro/stereo/dianyuntu.py | 8.59KB |
| yolov5_stereo_Pro/stereo/yolo/ | - |
| yolov5_stereo_Pro/stereo/__pycache__/ | - |
| yolov5_stereo_Pro/stereo/__pycache__/stereo.cpython-36.pyc | 4.49KB |
| yolov5_stereo_Pro/stereo/__pycache__/stereoconfig_Bud.cpython-36.pyc | 1.12KB |
| yolov5_stereo_Pro/stereo/__pycache__/stereoconfig_040_2.cpython-36.pyc | 1.15KB |
| yolov5_stereo_Pro/stereo/__pycache__/dianyuntu_yolo.cpython-36.pyc | 4.44KB |
| yolov5_stereo_Pro/data/ | - |
| yolov5_stereo_Pro/data/hyp.finetune.yaml | 846B |
| yolov5_stereo_Pro/data/coco128.yaml | 1.51KB |
| yolov5_stereo_Pro/data/argoverse_hd.yaml | 849B |
| yolov5_stereo_Pro/data/coco.yaml | 1.7KB |
| yolov5_stereo_Pro/data/voc.yaml | 738B |
| yolov5_stereo_Pro/data/hyp.scratch.yaml | 1.53KB |
| yolov5_stereo_Pro/data/video/ | - |
| yolov5_stereo_Pro/data/video/gym_001.mov | 31.95MB |
| yolov5_stereo_Pro/data/scripts/ | - |
| yolov5_stereo_Pro/data/scripts/get_voc.sh | 4.33KB |
| yolov5_stereo_Pro/data/scripts/get_argoverse_hd.sh | 1.97KB |
| yolov5_stereo_Pro/data/scripts/get_coco.sh | 963B |
| yolov5_stereo_Pro/data/images/ | - |
| yolov5_stereo_Pro/data/images/zidane.jpg | 164.99KB |
| yolov5_stereo_Pro/data/images/bus.jpg | 476.01KB |
| yolov5_stereo_Pro/__pycache__/ | - |
| yolov5_stereo_Pro/__pycache__/test.cpython-36.pyc | 10.63KB |
| yolov5_stereo_Pro/utils/ | - |
| yolov5_stereo_Pro/utils/general.py | 23.35KB |
| yolov5_stereo_Pro/utils/autoanchor.py | 6.78KB |
| yolov5_stereo_Pro/utils/activations.py | 2.2KB |
| yolov5_stereo_Pro/utils/__init__.py | - |
| yolov5_stereo_Pro/utils/torch_utils.py | 11.68KB |
| yolov5_stereo_Pro/utils/loss.py | 9.18KB |
| yolov5_stereo_Pro/utils/google_utils.py | 4.76KB |
| yolov5_stereo_Pro/utils/metrics.py | 8.76KB |
| yolov5_stereo_Pro/utils/datasets.py | 43.14KB |
| yolov5_stereo_Pro/utils/plots.py | 17.7KB |
| yolov5_stereo_Pro/utils/aws/ | - |
| yolov5_stereo_Pro/utils/aws/mime.sh | 780B |
| yolov5_stereo_Pro/utils/aws/__init__.py | - |
| yolov5_stereo_Pro/utils/aws/resume.py | 1.09KB |
| yolov5_stereo_Pro/utils/aws/userdata.sh | 1.21KB |
| yolov5_stereo_Pro/utils/google_app_engine/ | - |
| yolov5_stereo_Pro/utils/google_app_engine/Dockerfile | 821B |
| yolov5_stereo_Pro/utils/google_app_engine/app.yaml | 173B |
| yolov5_stereo_Pro/utils/google_app_engine/additional_requirements.txt | 105B |
| yolov5_stereo_Pro/utils/__pycache__/ | - |
| yolov5_stereo_Pro/utils/__pycache__/autoanchor.cpython-36.pyc | 5.94KB |
| yolov5_stereo_Pro/utils/__pycache__/__init__.cpython-36.pyc | 161B |
| yolov5_stereo_Pro/utils/__pycache__/general.cpython-36.pyc | 18.76KB |
| yolov5_stereo_Pro/utils/__pycache__/torch_utils.cpython-36.pyc | 10.74KB |
| yolov5_stereo_Pro/utils/__pycache__/datasets.cpython-37.pyc | 32.61KB |
| yolov5_stereo_Pro/utils/__pycache__/metrics.cpython-37.pyc | 7.48KB |
| yolov5_stereo_Pro/utils/__pycache__/datasets.cpython-38.pyc | 32.47KB |
| yolov5_stereo_Pro/utils/__pycache__/metrics.cpython-38.pyc | 7.42KB |
| yolov5_stereo_Pro/utils/__pycache__/activations.cpython-36.pyc | 3.36KB |
| yolov5_stereo_Pro/utils/__pycache__/activations.cpython-37.pyc | 3.37KB |
| yolov5_stereo_Pro/utils/__pycache__/plots.cpython-37.pyc | 15.53KB |
| yolov5_stereo_Pro/utils/__pycache__/plots.cpython-38.pyc | 15.34KB |
| yolov5_stereo_Pro/utils/__pycache__/activations.cpython-38.pyc | 3.33KB |
| yolov5_stereo_Pro/utils/__pycache__/google_utils.cpython-37.pyc | 3.15KB |
| yolov5_stereo_Pro/utils/__pycache__/metrics.cpython-36.pyc | 7.51KB |
| yolov5_stereo_Pro/utils/__pycache__/google_utils.cpython-38.pyc | 3.19KB |
| yolov5_stereo_Pro/utils/__pycache__/torch_utils.cpython-38.pyc | 10.73KB |
| yolov5_stereo_Pro/utils/__pycache__/__init__.cpython-38.pyc | 141B |
| yolov5_stereo_Pro/utils/__pycache__/general.cpython-38.pyc | 18.73KB |
| yolov5_stereo_Pro/utils/__pycache__/general.cpython-37.pyc | 18.68KB |
| yolov5_stereo_Pro/utils/__pycache__/google_utils.cpython-36.pyc | 3.19KB |
| yolov5_stereo_Pro/utils/__pycache__/datasets.cpython-36.pyc | 32.76KB |
| yolov5_stereo_Pro/utils/__pycache__/__init__.cpython-37.pyc | 137B |
| yolov5_stereo_Pro/utils/__pycache__/torch_utils.cpython-37.pyc | 10.69KB |
| yolov5_stereo_Pro/utils/__pycache__/plots.cpython-36.pyc | 15.63KB |
| yolov5_stereo_Pro/utils/__pycache__/loss.cpython-36.pyc | 6.37KB |
| yolov5_stereo_Pro/utils/__pycache__/autoanchor.cpython-38.pyc | 5.84KB |
| yolov5_stereo_Pro/utils/__pycache__/autoanchor.cpython-37.pyc | 5.88KB |
| yolov5_stereo_Pro/utils/wandb_logging/ | - |
| yolov5_stereo_Pro/utils/wandb_logging/wandb_utils.py | 6.73KB |
| yolov5_stereo_Pro/utils/wandb_logging/__init__.py | - |
| yolov5_stereo_Pro/utils/wandb_logging/log_dataset.py | 1.71KB |
| yolov5_stereo_Pro/weights/ | - |
| yolov5_stereo_Pro/weights/download_weights.sh | 277B |
| yolov5_stereo_Pro/weights/yolov5s/ | - |
| yolov5_stereo_Pro/weights/yolov5s/yolov5s.pt | 14.11MB |
| yolov5_stereo_Pro/weights/person/ | - |
| yolov5_stereo_Pro/weights/person/last_person_1000.pt | 13.73MB |
| yolov5_stereo_Pro/weights/person/last_person_300.pt | 13.72MB |
| yolov5_stereo_Pro/runs/ | - |
| yolov5_stereo_Pro/runs/detect/ | - |

Resource description

Features of the new version (note: currently it only supports stereo cameras at 2560×720 resolution; other resolutions require code changes):

1. The "回"-shaped (concentric square) pixel search is replaced with a "米"-shaped (8-direction star) search. The number of stored pixel samples (20 by default) is configurable, and the median of the valid pixels is used, which the author considers more representative than the mean.
2. Stereo matching runs only once every 10 frames (about 1/3 of a second), which speeds up the code.
3. Real-time detection is supported; the actual speed depends on the machine's performance.
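The "米"-shaped median lookup described above can be sketched roughly as follows. This is an illustrative reimplementation, not the package's actual code: the function name `star_median_depth`, the outward-walking loop, and the validity test (`d > 0`) are assumptions; the only details taken from the description are the 8-direction star sampling pattern, the configurable sample cap (20 by default), and the use of the median over valid pixels.

```python
import statistics

# The 8 directions of the "米" (star) pattern: horizontal, vertical, and both diagonals.
STAR_DIRS = [(-1, -1), (-1, 0), (-1, 1), (0, -1), (0, 1), (1, -1), (1, 0), (1, 1)]

def star_median_depth(depth, cx, cy, max_points=20):
    """Collect up to `max_points` valid depth samples by walking outward from
    (cx, cy) along the 8 star directions, then return their median.

    `depth` is a 2-D grid (rows of a depth/disparity map); values <= 0 are
    treated as invalid (e.g. failed stereo matches)."""
    h, w = len(depth), len(depth[0])
    samples = []
    if depth[cy][cx] > 0:  # include the center pixel itself if valid
        samples.append(depth[cy][cx])
    step = 0
    while len(samples) < max_points:
        step += 1
        in_bounds = False
        for dx, dy in STAR_DIRS:
            x, y = cx + dx * step, cy + dy * step
            if 0 <= y < h and 0 <= x < w:
                in_bounds = True
                d = depth[y][x]
                if d > 0:
                    samples.append(d)
                    if len(samples) == max_points:
                        break
        if not in_bounds:  # every direction has left the image; stop searching
            break
    return statistics.median(samples) if samples else None
```

In the full pipeline, a per-box lookup like this would run on every frame for each detected object, while the expensive stereo matching that produces the depth map is refreshed only once every 10 frames, as described above.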
<a href="https://apps.apple.com/app/id1452689527" target="_blank"><img src="https://user-images.githubusercontent.com/26833433/98699617-a1595a00-2377-11eb-8145-fc674eb9b1a7.jpg" width="1000"></a>&nbsp;<a href="https://github.com/ultralytics/yolov5/actions"><img src="https://github.com/ultralytics/yolov5/workflows/CI%20CPU%20testing/badge.svg" alt="CI CPU testing"></a>

This repository represents Ultralytics open-source research into future object detection methods, and incorporates lessons learned and best practices evolved over thousands of hours of training and evolution on anonymized client datasets. **All code and models are under active development, and are subject to modification or deletion without notice.** Use at your own risk.

<img src="https://user-images.githubusercontent.com/26833433/103594689-455e0e00-4eae-11eb-9cdf-7d753e2ceeeb.png" width="1000">

** GPU Speed measures end-to-end time per image averaged over 5000 COCO val2017 images using a V100 GPU with batch size 32, and includes image preprocessing, PyTorch FP16 inference, postprocessing and NMS. EfficientDet data from [google/automl](https://github.com/google/automl) at batch size 8.

- **January 5, 2021**: [v4.0 release](https://github.com/ultralytics/yolov5/releases/tag/v4.0): nn.SiLU() activations, [Weights & Biases](https://wandb.ai/) logging, [PyTorch Hub](https://pytorch.org/hub/ultralytics_yolov5/) integration.
- **August 13, 2020**: [v3.0 release](https://github.com/ultralytics/yolov5/releases/tag/v3.0): nn.Hardswish() activations, data autodownload, native AMP.
- **July 23, 2020**: [v2.0 release](https://github.com/ultralytics/yolov5/releases/tag/v2.0): improved model definition, training and mAP.
- **June 22, 2020**: [PANet](https://arxiv.org/abs/1803.01534) updates: new heads, reduced parameters, improved speed and mAP [364fcfd](https://github.com/ultralytics/yolov5/commit/364fcfd7dba53f46edd4f04c037a039c0a287972).
- **June 19, 2020**: [FP16](https://pytorch.org/docs/stable/nn.html#torch.nn.Module.half) as new default for smaller checkpoints and faster inference [d4c6674](https://github.com/ultralytics/yolov5/commit/d4c6674c98e19df4c40e33a777610a18d1961145).

## Pretrained Checkpoints

| Model | size | AP<sup>val</sup> | AP<sup>test</sup> | AP<sub>50</sub> | Speed<sub>V100</sub> | FPS<sub>V100</sub> || params | GFLOPS |
|---------- |------ |------ |------ |------ | -------- | ------| ------ |------ | :------: |
| [YOLOv5s](https://github.com/ultralytics/yolov5/releases) |640 |36.8 |36.8 |55.6 |**2.2ms** |**455** ||7.3M |17.0 |
| [YOLOv5m](https://github.com/ultralytics/yolov5/releases) |640 |44.5 |44.5 |63.1 |2.9ms |345 ||21.4M |51.3 |
| [YOLOv5l](https://github.com/ultralytics/yolov5/releases) |640 |48.1 |48.1 |66.4 |3.8ms |264 ||47.0M |115.4 |
| [YOLOv5x](https://github.com/ultralytics/yolov5/releases) |640 |**50.1** |**50.1** |**68.7** |6.0ms |167 ||87.7M |218.8 |
| | | | | | | || | |
| [YOLOv5x](https://github.com/ultralytics/yolov5/releases) + TTA |832 |**51.9** |**51.9** |**69.6** |24.9ms |40 ||87.7M |1005.3 |

<!---
| [YOLOv5l6](https://github.com/ultralytics/yolov5/releases) |640 |49.0 |49.0 |67.4 |4.1ms |244 ||77.2M |117.7 |
| [YOLOv5l6](https://github.com/ultralytics/yolov5/releases) |1280 |53.0 |53.0 |70.8 |12.3ms |81 ||77.2M |117.7 |
--->

** AP<sup>test</sup> denotes COCO [test-dev2017](http://cocodataset.org/#upload) server results, all other AP results denote val2017 accuracy.
** All AP numbers are for single-model single-scale without ensemble or TTA. **Reproduce mAP** by `python test.py --data coco.yaml --img 640 --conf 0.001 --iou 0.65`
** Speed<sub>GPU</sub> averaged over 5000 COCO val2017 images using a GCP [n1-standard-16](https://cloud.google.com/compute/docs/machine-types#n1_standard_machine_types) V100 instance, and includes image preprocessing, FP16 inference, postprocessing and NMS. NMS is 1-2ms/img. **Reproduce speed** by `python test.py --data coco.yaml --img 640 --conf 0.25 --iou 0.45`
** All checkpoints are trained to 300 epochs with default settings and hyperparameters (no autoaugmentation).
** Test Time Augmentation ([TTA](https://github.com/ultralytics/yolov5/issues/303)) runs at 3 image sizes. **Reproduce TTA** by `python test.py --data coco.yaml --img 832 --iou 0.65 --augment`

## Requirements

Python 3.8 or later with all [requirements.txt](https://github.com/ultralytics/yolov5/blob/master/requirements.txt) dependencies installed, including `torch>=1.7`. To install run:

```bash
$ pip install -r requirements.txt
```

## Tutorials

* [Train Custom Data](https://github.com/ultralytics/yolov5/wiki/Train-Custom-Data)&nbsp; 🚀 RECOMMENDED
* [Weights & Biases Logging](https://github.com/ultralytics/yolov5/issues/1289)&nbsp; 🌟 NEW
* [Multi-GPU Training](https://github.com/ultralytics/yolov5/issues/475)
* [PyTorch Hub](https://github.com/ultralytics/yolov5/issues/36)&nbsp; ⭐ NEW
* [ONNX and TorchScript Export](https://github.com/ultralytics/yolov5/issues/251)
* [Test-Time Augmentation (TTA)](https://github.com/ultralytics/yolov5/issues/303)
* [Model Ensembling](https://github.com/ultralytics/yolov5/issues/318)
* [Model Pruning/Sparsity](https://github.com/ultralytics/yolov5/issues/304)
* [Hyperparameter Evolution](https://github.com/ultralytics/yolov5/issues/607)
* [Transfer Learning with Frozen Layers](https://github.com/ultralytics/yolov5/issues/1314)&nbsp; ⭐ NEW
* [TensorRT Deployment](https://github.com/wang-xinyu/tensorrtx)

## Environments

YOLOv5 may be run in any of the following up-to-date verified environments (with all dependencies including [CUDA](https://developer.nvidia.com/cuda)/[CUDNN](https://developer.nvidia.com/cudnn), [Python](https://www.python.org/) and [PyTorch](https://pytorch.org/) preinstalled):

- **Google Colab and Kaggle** notebooks with free GPU: <a href="https://colab.research.google.com/github/ultralytics/yolov5/blob/master/tutorial.ipynb"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"></a> <a href="https://www.kaggle.com/ultralytics/yolov5"><img src="https://kaggle.com/static/images/open-in-kaggle.svg" alt="Open In Kaggle"></a>
- **Google Cloud** Deep Learning VM. See [GCP Quickstart Guide](https://github.com/ultralytics/yolov5/wiki/GCP-Quickstart)
- **Amazon** Deep Learning AMI. See [AWS Quickstart Guide](https://github.com/ultralytics/yolov5/wiki/AWS-Quickstart)
- **Docker Image**. See [Docker Quickstart Guide](https://github.com/ultralytics/yolov5/wiki/Docker-Quickstart) <a href="https://hub.docker.com/r/ultralytics/yolov5"><img src="https://img.shields.io/docker/pulls/ultralytics/yolov5?logo=docker" alt="Docker Pulls"></a>

## Inference

detect.py runs inference on a variety of sources, downloading models automatically from the [latest YOLOv5 release](https://github.com/ultralytics/yolov5/releases) and saving results to `runs/detect`.

```bash
$ python detect.py --source 0  # webcam
                            file.jpg  # image
                            file.mp4  # video
                            path/  # directory
                            path/*.jpg  # glob
                            rtsp://170.93.143.139/rtplive/470011e600ef003a004ee33696235daa  # rtsp stream
                            rtmp://192.168.1.105/live/test  # rtmp stream
                            http://112.50.243.8/PLTV/88888888/224/3221225900/1.m3u8  # http stream
```

To run inference on example images in `data/images`:

```bash
$ python detect.py --source data/images --weights yolov5s.pt --conf 0.25

Namespace(agnostic_nms=False, augment=False, classes=None, conf_thres=0.25, device='', exist_ok=False, img_size=640, iou_thres=0.45, name='exp', project='runs/detect', save_conf=False, save_txt=False, source='data/images/', update=False, view_img=False, weights=['yolov5s.pt'])
YOLOv5 v4.0-96-g83dc1b4 torch 1.7.0+cu101 CUDA:0 (Tesla V100-SXM2-16GB, 16160.5MB)
```
