X-AnyLabeling


Resource introduction:

Artificial intelligence has advanced rapidly in recent years, and its applications now reach nearly every part of society. As the technology matures, the demand for processing and analyzing large volumes of data keeps growing. Data annotation is a key step in the machine-learning pipeline: its quality and efficiency directly affect the performance of the final model. AI-assisted auto-labeling tools were created to make this work more efficient.

X-AnyLabeling is a tool designed specifically to simplify and accelerate the data-annotation process. It has substantially changed how annotation is done, particularly for different data types such as images, audio, and text. Its core strength is that it leverages AI: by importing a pre-trained model, it can execute labeling tasks automatically, which speeds up annotation while sharply reducing the cost and error rate of manual labeling.

The tool is not limited to users who already have their own models. Beginners, or developers without the resources to train models at scale, can also use X-AnyLabeling to try AI-assisted annotation: upload a dataset, run one of the advanced models already integrated into the tool, and quickly obtain labeled data for subsequent model training and validation.

In practice, X-AnyLabeling performs especially well in image recognition and classification. With a simple configuration, developers can embed a deep-learning image-recognition model into the tool so that it automatically detects and labels objects in images. In autonomous-driving research, for example, large volumes of road, pedestrian, and vehicle imagery must be annotated, and X-AnyLabeling can greatly improve annotation throughput.

The design also emphasizes user experience. The interface is friendly and simple to operate, so even users without a deep technical background can get started quickly. Data and annotation results are presented through a graphical interface, making the whole labeling process intuitive. The tool also supports multi-user collaboration: team members can share datasets and annotation results on the same platform, which encourages teamwork and knowledge sharing.

X-AnyLabeling reflects both technical progress and a milestone in the development of AI-assisted tooling. It improves data-processing efficiency and opens new paths for the adoption of AI technology. As AI continues to develop, X-AnyLabeling will keep iterating, introducing more advanced features and algorithms to meet growing user needs. As an AI auto-labeling assistant, it provides an efficient, accurate solution for annotation work; its broad applicability both accelerates the data pipeline and noticeably improves data quality, offering solid support for machine-learning research and applications.
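To make the workflow above concrete, the sketch below builds the kind of per-image annotation record such tools emit after auto-labeling. It assumes a LabelMe-style JSON schema, which X-AnyLabeling's output format resembles; the exact field names here are illustrative, not an authoritative specification of the tool's format.

```python
import json

def make_annotation(image_path, width, height, shapes):
    """Assemble a minimal LabelMe-style annotation record for one image.

    Field names follow the LabelMe convention; verify them against the
    files your labeling tool actually writes before relying on them.
    """
    return {
        "version": "2.4.2",       # version of the tool that produced the file
        "imagePath": image_path,
        "imageWidth": width,
        "imageHeight": height,
        "shapes": shapes,         # one entry per labeled object
    }

def make_rect(label, x1, y1, x2, y2, score=None):
    """Build one rectangle shape from top-left and bottom-right corners."""
    shape = {
        "label": label,
        "shape_type": "rectangle",
        "points": [[x1, y1], [x2, y2]],
    }
    if score is not None:
        shape["score"] = score    # confidence reported by the auto-labeling model
    return shape

if __name__ == "__main__":
    # Example: two auto-labeled objects in a driving-scene frame.
    ann = make_annotation(
        "frame_0001.jpg", 1920, 1080,
        [make_rect("pedestrian", 100, 200, 180, 420, score=0.91),
         make_rect("vehicle", 600, 300, 1100, 700, score=0.88)],
    )
    print(json.dumps(ann, indent=2))
```

Records like this round-trip cleanly through `json`, which is what makes the labeled output directly usable for downstream training and validation pipelines.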

<div align="center">
  <p>
    <a href="https://github.com/CVHub520/X-AnyLabeling/" target="_blank">
      <img alt="X-AnyLabeling" height="200px" src="https://github.com/user-attachments/assets/0714a182-92bd-4b47-b48d-1c5d7c225176"></a>
  </p>

[English](README.md) | [简体中文](README_zh-CN.md)

</div>

<p align="center">
  <a href="./LICENSE"><img src="https://img.shields.io/badge/License-LGPL v3-blue.svg"></a>
  <a href=""><img src="https://img.shields.io/github/v/release/CVHub520/X-AnyLabeling?color=ffa"></a>
  <a href=""><img src="https://img.shields.io/badge/python-3.8+-aff.svg"></a>
  <a href=""><img src="https://img.shields.io/badge/os-linux, win, mac-pink.svg"></a>
  <a href="https://github.com/CVHub520/X-AnyLabeling/stargazers"><img src="https://img.shields.io/github/stars/CVHub520/X-AnyLabeling?color=ccf"></a>
</p>

![](https://user-images.githubusercontent.com/18329471/234640541-a6a65fbc-d7a5-4ec3-9b65-55305b01a7aa.png)

<img src="https://github.com/user-attachments/assets/0b1e3c69-a800-4497-9bad-4332c1ce1ebf" width="100%">
<div align="center"><strong>Segment Anything v2</strong></div>
</br>

| **Tracking by HBB Detection** | **Tracking by OBB Detection** |
| :---: | :---: |
| <img src="https://github.com/user-attachments/assets/be67d4f8-eb31-4bb3-887c-d954bb4a5d6d" width="100%"> | <img src="https://github.com/user-attachments/assets/d85b1102-124a-4971-9332-c51fd2b1c47b" width="100%"> |
| **Tracking by Instance Segmentation** | **Tracking by Pose Estimation** |
| <img src="https://github.com/user-attachments/assets/8d412dc6-62c7-4bb2-9a1e-026448acf2bf" width="100%"> | <img src="https://github.com/user-attachments/assets/bab038a7-3023-4097-bdcc-90e5009477c0" width="100%"> |

## 🥳 What's New

- Sep. 2024:
  - Release version [2.4.2](https://github.com/CVHub520/X-AnyLabeling/releases/tag/v2.4.2)
  - 🧸🧸🧸 Added support for image matting based on the [RMBG v1.4 model](https://huggingface.co/briaai/RMBG-1.4).
  - 🔥🔥🔥 Added support for interactive video object tracking based on [Segment-Anything-2](https://github.com/CVHub520/segment-anything-2). [[Tutorial](examples/interactive_video_object_segmentation/README.md)]

<br>

<details>
<summary>Click to view more news.</summary>

- Aug. 2024:
  - Release version [2.4.1](https://github.com/CVHub520/X-AnyLabeling/releases/tag/v2.4.1)
  - Support [tracking-by-det/obb/seg/pose](./examples/multiple_object_tracking/README.md) tasks.
  - Support the [Segment-Anything-2](https://github.com/facebookresearch/segment-anything-2) model! (Recommended)
  - Support the [Grounding-SAM2](./docs/en/model_zoo.md) model.
  - Support a lightweight model for Japanese recognition.
- Jul. 2024:
  - Add PPOCR-Recognition and KIE import/export functionality for training PP-OCR tasks.
  - Add ODVG import/export functionality for training grounding tasks.
  - Add support for annotating KIE linking fields.
  - Support the [RT-DETRv2](https://github.com/lyuwenyu/RT-DETR) model.
  - Support the [Depth Anything v2](https://github.com/DepthAnything/Depth-Anything-V2) model.
- Jun. 2024:
  - Support the [YOLOv8-Pose](https://docs.ultralytics.com/tasks/pose/) model.
  - Add [yolo-pose](./docs/en/user_guide.md) import/export functionality.
- May 2024:
  - Support the [YOLOv8-World](https://docs.ultralytics.com/models/yolo-world), [YOLOv8-oiv7](https://docs.ultralytics.com/models/yolov8), and [YOLOv10](https://github.com/THU-MIG/yolov10) models.
  - Release version [2.3.6](https://github.com/CVHub520/X-AnyLabeling/releases/tag/v2.3.6).
  - Add a feature to display confidence scores.
- Mar. 2024:
  - Release version [2.3.5](https://github.com/CVHub520/X-AnyLabeling/releases/tag/v2.3.5).
- Feb. 2024:
  - Release version [2.3.4](https://github.com/CVHub520/X-AnyLabeling/releases/tag/v2.3.4).
  - Enable the label display feature.
  - Release version [2.3.3](https://github.com/CVHub520/X-AnyLabeling/releases/tag/v2.3.3).
  - Release version [2.3.2](https://github.com/CVHub520/X-AnyLabeling/releases/tag/v2.3.2).
  - Support the [YOLOv9](https://github.com/WongKinYiu/yolov9) model.
  - Support conversion from horizontal bounding boxes to rotated bounding boxes.
  - Support label deletion and renaming. For more details, please refer to the [document](./docs/zh_cn/user_guide.md).
  - Support quick tag correction; please refer to this [document](./docs/en/user_guide.md) for guidance.
  - Release version [2.3.1](https://github.com/CVHub520/X-AnyLabeling/releases/tag/v2.3.1).
- Jan. 2024:
  - Combine CLIP and SAM models for enhanced semantic and spatial understanding. An example can be found [here](./anylabeling/configs/auto_labeling/edge_sam_with_chinese_clip.yaml).
  - Add support for the [Depth Anything](https://github.com/LiheYoung/Depth-Anything.git) model in the depth estimation task.
  - Release version [2.3.0](https://github.com/CVHub520/X-AnyLabeling/releases/tag/v2.3.0).
  - Support the [YOLOv8-OBB](https://github.com/ultralytics/ultralytics) model.
  - Support the [RTMDet](https://github.com/open-mmlab/mmyolo/tree/main/configs/rtmdet) and [RTMO](https://github.com/open-mmlab/mmpose/tree/main/projects/rtmpose) models.
  - Release a [Chinese license plate](https://github.com/we0091234/Chinese_license_plate_detection_recognition) detection and recognition model based on YOLOv5.
- Dec. 2023:
  - Release version [2.2.0](https://github.com/CVHub520/X-AnyLabeling/releases/tag/v2.2.0).
  - Support [EdgeSAM](https://github.com/chongzhou96/EdgeSAM), optimized for efficient execution on edge devices with minimal performance compromise.
  - Support the YOLOv5-Cls and YOLOv8-Cls models.
- Nov. 2023:
  - Release version [2.1.0](https://github.com/CVHub520/X-AnyLabeling/releases/tag/v2.1.0).
  - Support the [InternImage](https://arxiv.org/abs/2211.05778) model (**CVPR'23**).
  - Release version [2.0.0](https://github.com/CVHub520/X-AnyLabeling/releases/tag/v2.0.0).
  - Added support for Grounding-SAM, combining [GroundingDINO](https://github.com/wenyi5608/GroundingDINO) with [HQ-SAM](https://github.com/SysCV/sam-hq) to achieve SOTA zero-shot high-quality predictions!
  - Enhanced support for the [HQ-SAM](https://github.com/SysCV/sam-hq) model to achieve high-quality mask predictions.
  - Support the [PersonAttribute](https://github.com/PaddlePaddle/PaddleClas/blob/release/2.5/docs/en/PULC/PULC_person_attribute_en.md) and [VehicleAttribute](https://github.com/PaddlePaddle/PaddleClas/blob/release/2.5/docs/en/PULC/PULC_vehicle_attribute_en.md) models for multi-label classification tasks.
  - Introduce new multi-label attribute annotation functionality.
  - Release version [1.1.0](https://github.com/CVHub520/X-AnyLabeling/releases/tag/v1.1.0).
  - Support pose estimation: [YOLOv8-Pose](https://github.com/ultralytics/ultralytics).
  - Support object-level tagging with yolov5_ram.
  - Add a new feature enabling batch labeling for arbitrary unknown categories based on Grounding-DINO.
- Oct. 2023:
  - Release version [1.0.0](https://github.com/CVHub520/X-AnyLabeling/releases/tag/v1.0.0).
  - Add a new feature for rotated boxes.
  - Support [YOLOv5-OBB](https://github.com/hukaixuan19970627/yolov5_obb) with the [DroneVehicle](https://github.com/VisDrone/DroneVehicle) and [DOTA](https://captain-whu.github.io/DOTA/index.html)-v1.0/v1.5/v2.0 models.
  - Release the SOTA zero-shot object detection model [GroundingDINO](https://github.com/wenyi5608/GroundingDINO).
  - Release the SOTA image tagging model [Recognize Anything](https://github.com/xinyu1205/Tag2Text).
  - Support the YOLOv5-SAM and YOLOv8-EfficientViT_SAM union tasks.
  - Support YOLOv5 and YOLOv8 segmentation tasks.
  - Release the [Gold-YOLO](https://github.com/huawei-noah/Efficient-Computing/tree/master/Detection/Gold-YOLO) and [DAMO-YOLO](https://github.com/tinyvision/DAMO-YOLO) models.
  - Release the MOT algorithm [OC_Sort](https://github.com/noahcao/OC_SORT) (**CVPR'23**).
  - Add a new feature for small object detection using [SAHI](https://github.com/obss/sahi).
- Sep. 2023:
  - Release version [0.

</details>