ZIP deepseek-v3-main.zip source code, 1.65MB

sanjina

File list:

deepseek-v3-main.zip contains about 26 files
  1. deepseek-v3-main/
  2. deepseek-v3-main/.github/
  3. deepseek-v3-main/.github/ISSUE_TEMPLATE/
  4. deepseek-v3-main/.github/ISSUE_TEMPLATE/bug_report.md 468B
  5. deepseek-v3-main/.github/ISSUE_TEMPLATE/feature_request.md 595B
  6. deepseek-v3-main/.gitignore 3.32KB
  7. deepseek-v3-main/CITATION.cff 5.93KB
  8. deepseek-v3-main/DeepSeek_V3.pdf 1.59MB
  9. deepseek-v3-main/LICENSE-CODE 1.04KB
  10. deepseek-v3-main/LICENSE-MODEL 13.44KB
  11. deepseek-v3-main/README.md 23.41KB
  12. deepseek-v3-main/README_WEIGHTS.md 3.57KB
  13. deepseek-v3-main/figures/
  14. deepseek-v3-main/figures/benchmark.png 179.28KB
  15. deepseek-v3-main/figures/niah.png 105.93KB
  16. deepseek-v3-main/inference/
  17. deepseek-v3-main/inference/configs/
  18. deepseek-v3-main/inference/configs/config_16B.json 417B
  19. deepseek-v3-main/inference/configs/config_236B.json 455B
  20. deepseek-v3-main/inference/configs/config_671B.json 503B
  21. deepseek-v3-main/inference/convert.py 3.73KB
  22. deepseek-v3-main/inference/fp8_cast_bf16.py 4.35KB
  23. deepseek-v3-main/inference/generate.py 7.63KB
  24. deepseek-v3-main/inference/kernel.py 7.89KB
  25. deepseek-v3-main/inference/model.py 31.75KB
  26. deepseek-v3-main/inference/requirements.txt 66B
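A listing like the one above can be regenerated with Python's standard-library `zipfile` module. A minimal sketch; the tiny in-memory archive built here is only a stand-in for the real deepseek-v3-main.zip:

```python
import io
import zipfile

def list_zip(data: bytes):
    """Return (name, size) pairs for every entry in a zip archive."""
    with zipfile.ZipFile(io.BytesIO(data)) as zf:
        return [(info.filename, info.file_size) for info in zf.infolist()]

# Build a tiny in-memory archive as a stand-in for deepseek-v3-main.zip.
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as zf:
    zf.writestr("deepseek-v3-main/README.md", "# DeepSeek-V3")

entries = list_zip(buf.getvalue())
print(entries)  # → [('deepseek-v3-main/README.md', 13)]
```

To inspect the actual download, replace the in-memory buffer with `open("deepseek-v3-main.zip", "rb").read()`.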

Resource description:

To make downloading easier, the deepseek-v3-main.zip source code is made available here.
<!-- markdownlint-disable first-line-h1 -->
<!-- markdownlint-disable html -->
<!-- markdownlint-disable no-duplicate-header -->

<div align="center">
  <img src="https://github.com/deepseek-ai/DeepSeek-V2/blob/main/figures/logo.svg?raw=true" width="60%" alt="DeepSeek-V3">
</div>
<hr>
<div align="center">
  <a href="https://www.deepseek.com/" target="_blank">
    <img alt="Homepage" src="https://github.com/deepseek-ai/DeepSeek-V2/blob/main/figures/badge.svg?raw=true">
  </a>
  <a href="https://chat.deepseek.com/" target="_blank">
    <img alt="Chat" src="https://img.shields.io/badge/🤖 Chat-DeepSeek V3-536af5?color=536af5&logoColor=white">
  </a>
  <a href="https://huggingface.co/deepseek-ai" target="_blank">
    <img alt="Hugging Face" src="https://img.shields.io/badge/🤗 Hugging Face-DeepSeek AI-ffc107?color=ffc107&logoColor=white">
  </a>
</div>
<div align="center">
  <a href="https://discord.gg/Tc7c45Zzu5" target="_blank">
    <img alt="Discord" src="https://img.shields.io/badge/Discord-DeepSeek AI-7289da?logo=discord&logoColor=white&color=7289da">
  </a>
  <a href="https://github.com/deepseek-ai/DeepSeek-V2/blob/main/figures/qr.jpeg?raw=true" target="_blank">
    <img alt="Wechat" src="https://img.shields.io/badge/WeChat-DeepSeek AI-brightgreen?logo=wechat&logoColor=white">
  </a>
  <a href="https://twitter.com/deepseek_ai" target="_blank">
    <img alt="Twitter Follow" src="https://img.shields.io/badge/Twitter-deepseek_ai-white?logo=x&logoColor=white">
  </a>
</div>
<div align="center">
  <a href="https://github.com/deepseek-ai/DeepSeek-V3/blob/main/LICENSE-CODE">
    <img alt="Code License" src="https://img.shields.io/badge/Code_License-MIT-f5de53?&color=f5de53">
  </a>
  <a href="https://github.com/deepseek-ai/DeepSeek-V3/blob/main/LICENSE-MODEL">
    <img alt="Model License" src="https://img.shields.io/badge/Model_License-Model_Agreement-f5de53?&color=f5de53">
  </a>
</div>
<p align="center">
  <a href="DeepSeek_V3.pdf"><b>Paper Link</b>👁️</a>
</p>

## Table of Contents

1. [Introduction](#1-introduction)
2. [Model Summary](#2-model-summary)
3. [Model Downloads](#3-model-downloads)
4. [Evaluation Results](#4-evaluation-results)
5. [Chat Website & API Platform](#5-chat-website--api-platform)
6. [How to Run Locally](#6-how-to-run-locally)
7. [License](#7-license)
8. [Citation](#8-citation)
9. [Contact](#9-contact)

## 1. Introduction

We present DeepSeek-V3, a strong Mixture-of-Experts (MoE) language model with 671B total parameters, of which 37B are activated for each token. To achieve efficient inference and cost-effective training, DeepSeek-V3 adopts Multi-head Latent Attention (MLA) and DeepSeekMoE architectures, which were thoroughly validated in DeepSeek-V2. Furthermore, DeepSeek-V3 pioneers an auxiliary-loss-free strategy for load balancing and sets a multi-token prediction training objective for stronger performance. We pre-train DeepSeek-V3 on 14.8 trillion diverse and high-quality tokens, followed by Supervised Fine-Tuning and Reinforcement Learning stages to fully harness its capabilities. Comprehensive evaluations reveal that DeepSeek-V3 outperforms other open-source models and achieves performance comparable to leading closed-source models. Despite its excellent performance, DeepSeek-V3 requires only 2.788M H800 GPU hours for its full training. In addition, its training process is remarkably stable. Throughout the entire training process, we did not experience any irrecoverable loss spikes or perform any rollbacks.

<p align="center">
  <img width="80%" src="figures/benchmark.png">
</p>

## 2. Model Summary

---

**Architecture: Innovative Load Balancing Strategy and Training Objective**

- On top of the efficient architecture of DeepSeek-V2, we pioneer an auxiliary-loss-free strategy for load balancing, which minimizes the performance degradation that arises from encouraging load balancing.
- We investigate a Multi-Token Prediction (MTP) objective and prove it beneficial to model performance. It can also be used for speculative decoding for inference acceleration.

---

**Pre-Training: Towards Ultimate Training Efficiency**

- We design an FP8 mixed precision training framework and, for the first time, validate the feasibility and effectiveness of FP8 training on an extremely large-scale model.
- Through co-design of algorithms, frameworks, and hardware, we overcome the communication bottleneck in cross-node MoE training, nearly achieving full computation-communication overlap. This significantly enhances our training efficiency and reduces the training costs, enabling us to further scale up the model size without additional overhead.
- At an economical cost of only 2.664M H800 GPU hours, we complete the pre-training of DeepSeek-V3 on 14.8T tokens, producing the currently strongest open-source base model. The subsequent training stages after pre-training require only 0.1M GPU hours.

---

**Post-Training: Knowledge Distillation from DeepSeek-R1**

- We introduce an innovative methodology to distill reasoning capabilities from the long-Chain-of-Thought (CoT) model, specifically from one of the DeepSeek R1 series models, into standard LLMs, particularly DeepSeek-V3. Our pipeline elegantly incorporates the verification and reflection patterns of R1 into DeepSeek-V3 and notably improves its reasoning performance. Meanwhile, we also maintain control over the output style and length of DeepSeek-V3.

---

## 3. Model Downloads

<div align="center">

| **Model** | **#Total Params** | **#Activated Params** | **Context Length** | **Download** |
| :------------: | :------------: | :------------: | :------------: | :------------: |
| DeepSeek-V3-Base | 671B | 37B | 128K | [🤗 Hugging Face](https://huggingface.co/deepseek-ai/DeepSeek-V3-Base) |
| DeepSeek-V3 | 671B | 37B | 128K | [🤗 Hugging Face](https://huggingface.co/deepseek-ai/DeepSeek-V3) |

</div>

> [!NOTE]
> The total size of DeepSeek-V3 models on Hugging Face is 685B, which includes 671B of the Main Model weights and 14B of the Multi-Token Prediction (MTP) Module weights.

To ensure optimal performance and flexibility, we have partnered with open-source communities and hardware vendors to provide multiple ways to run the model locally. For step-by-step guidance, check out Section 6: [How to Run Locally](#6-how-to-run-locally).

For developers looking to dive deeper, we recommend exploring [README_WEIGHTS.md](./README_WEIGHTS.md) for details on the Main Model weights and the Multi-Token Prediction (MTP) Modules. Please note that MTP support is currently under active development within the community, and we welcome your contributions and feedback.

## 4. Evaluation Results

### Base Model

#### Standard Benchmarks

<div align="center">

| | Benchmark (Metric) | # Shots | DeepSeek-V2 | Qwen2.5 72B | LLaMA3.1 405B | DeepSeek-V3 |
|---|-------------------|----------|--------|-------------|---------------|---------|
| | Architecture | - | MoE | Dense | Dense | MoE |
| | # Activated Params | - | 21B | 72B | 405B | 37B |
| | # Total Params | - | 236B | 72B | 405B | 671B |
| English | Pile-test (BPB) | - | 0.606 | 0.638 | **0.542** | 0.548 |
| | BBH (EM) | 3-shot | 78.8 | 79.8 | 82.9 | **87.5** |
| | MMLU (Acc.) | 5-shot | 78.4 | 85.0 | 84.4 | **87 |

</div>
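The Model Downloads note above gives the checkpoint size as 685B parameters (671B main weights plus 14B MTP module weights). A quick arithmetic check; the one-byte-per-FP8-parameter storage footprint is a back-of-the-envelope assumption that ignores scaling factors and metadata:

```python
# Sanity-check the Model Downloads note: 685B checkpoint = 671B main + 14B MTP.
main_params = 671e9   # Main Model weights
mtp_params = 14e9     # Multi-Token Prediction (MTP) module weights
total_params = main_params + mtp_params

# Rough on-disk footprint assuming one byte per parameter (FP8, ignoring
# per-block scaling factors and file metadata -- an assumption, not a
# released figure).
fp8_gib = total_params / 2**30

print(total_params / 1e9)  # → 685.0
print(round(fp8_gib))      # → 638
```

So even in FP8, the full checkpoint is on the order of hundreds of GiB, which is why the README points to README_WEIGHTS.md and community tooling for handling the weights.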