mirror of
https://github.com/handsomezhuzhu/vllm-npu-plugin.git
synced 2026-02-20 11:42:30 +00:00
feat: add environment installation guide, verification, and benchmark docs
This commit is contained in:

INSTALL.md (new file, 175 lines)
# vllm-npu-plugin Environment Installation Guide

## 1. Environment Overview

| Component | Required Version | Notes |
|------|---------|------|
| **OS** | Linux aarch64 | Ascend NPU server |
| **Python** | ≥ 3.9 (3.11 recommended) | Verified with 3.11.10 |
| **NPU driver** | Ascend HDK driver | Must be installed beforehand |
| **CANN Toolkit** | **8.3.RC2** | Core Ascend compute architecture suite |
| **NNAL (ATB)** | **8.3.RC2** | Neural-network acceleration library; **must match the CANN version** |
| **PyTorch** | 2.7.1 | CPU build is sufficient (torch_npu provides NPU support) |
| **torch_npu** | 2.7.1 | Huawei's NPU backend for PyTorch |
| **vLLM** | 0.11.0 branch | Custom branch `feat/ascend-npu-adapt-v0.11.0` |

> **Core principle: the major versions of CANN, NNAL, and torch_npu must match.**
> For example, CANN 8.3.RC2 + NNAL 8.3.RC2 + the corresponding torch_npu. A version mismatch causes ATB operators (such as LinearOperation) to fail during initialization.
---

## 2. Installation Steps

### Step 1: Install CANN Toolkit 8.3.RC2

```bash
# Download (from the Ascend community: https://www.hiascend.com/software/cann )
wget <CANN_8.3.RC2_URL> -O Ascend-cann-toolkit_8.3.RC2_linux-aarch64.run

# Install
chmod +x Ascend-cann-toolkit_8.3.RC2_linux-aarch64.run
./Ascend-cann-toolkit_8.3.RC2_linux-aarch64.run --install

# Set environment variables
source /usr/local/Ascend/ascend-toolkit/set_env.sh
```
### Step 2:安装 NNAL 8.3.RC2(关键!)
|
||||
|
||||
```bash
|
||||
# 下载(从昇腾社区 https://www.hiascend.com/software/nnal )
|
||||
wget <NNAL_8.3.RC2_URL> -O Ascend-cann-nnal_8.3.RC2_linux-aarch64.run
|
||||
|
||||
# 安装
|
||||
chmod +x Ascend-cann-nnal_8.3.RC2_linux-aarch64.run
|
||||
./Ascend-cann-nnal_8.3.RC2_linux-aarch64.run --install
|
||||
|
||||
# 设置环境变量(大模型场景用 atb)
|
||||
source /usr/local/Ascend/nnal/atb/set_env.sh
|
||||
```
|
||||
|
||||
> ⚠️ **常见坑**:如果 CANN 升级了但 NNAL 没有升级,会导致 `LinearOperation setup failed!` 错误。
|
||||
> 验证方法:`ls /usr/local/Ascend/nnal/atb/` 查看版本目录,确保与 CANN 一致。
|
||||
|
||||
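The matching rule above can be expressed as a quick sanity check. This is a minimal sketch, assuming version strings in the `8.3.RC2` style; it only compares the leading `major.minor` part and is not an official Ascend tool:

```python
import re

def major_version(version: str) -> str:
    """Extract the comparable 'major.minor' part (e.g. '8.3')
    from a version string such as '8.3.RC2'."""
    match = re.match(r"(\d+\.\d+)", version)
    if not match:
        raise ValueError(f"unrecognized version string: {version}")
    return match.group(1)

def versions_match(cann: str, nnal: str) -> bool:
    """CANN and NNAL must share the same major version."""
    return major_version(cann) == major_version(nnal)

print(versions_match("8.3.RC2", "8.3.RC2"))  # True: safe combination
print(versions_match("8.3.RC2", "8.2.RC1"))  # False: expect ATB setup failures
```

Feed it the version strings read from `version.cfg` and the NNAL directory listing mentioned above.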
### Step 3:安装 PyTorch + torch_npu
|
||||
|
||||
```bash
|
||||
# 安装 PyTorch 2.7.1
|
||||
pip install torch==2.7.1
|
||||
|
||||
# 安装 torch_npu(必须与 PyTorch 版本对应)
|
||||
pip install torch_npu==2.7.1
|
||||
```
|
||||
|
||||
### Step 4:安装 vLLM(自定义分支)
|
||||
|
||||
```bash
|
||||
cd /workspace/mnt/vllm_ascend/vllm
|
||||
git checkout feat/ascend-npu-adapt-v0.11.0
|
||||
pip install -e .
|
||||
```
|
||||
|
||||
### Step 5:安装 vllm-npu-plugin
|
||||
|
||||
```bash
|
||||
cd /workspace/mnt/vllm_ascend/vllm-npu-plugin
|
||||
pip install -e .
|
||||
```
|
||||
|
||||
---

## 3. Environment Variable Configuration

Add the following to `~/.bashrc` or your launch script:

```bash
# CANN Toolkit
source /usr/local/Ascend/ascend-toolkit/set_env.sh

# NNAL / ATB (large-model workloads)
source /usr/local/Ascend/nnal/atb/set_env.sh

# Restrict visible NPU devices (single-card example)
export ASCEND_VISIBLE_DEVICES=0

# ATB performance tuning (optional; set by default in the container)
export ATB_OPERATION_EXECUTE_ASYNC=1
export ATB_CONTEXT_HOSTTILING_RING=1
export ATB_CONTEXT_HOSTTILING_SIZE=102400
export ATB_OPSRUNNER_KERNEL_CACHE_LOCAL_COUNT=1
export ATB_OPSRUNNER_KERNEL_CACHE_GLOABL_COUNT=16
export ATB_WORKSPACE_MEM_ALLOC_GLOBAL=1
export ATB_USE_TILING_COPY_STREAM=0
```
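Before launching, it can help to confirm these variables are actually set in the current process (a missing one usually means `set_env.sh` was not sourced). A minimal sketch; the variable list below is a subset copied from the block above, not an exhaustive requirement:

```python
import os

# Subset of the variables from the configuration block above.
EXPECTED = [
    "ASCEND_VISIBLE_DEVICES",
    "ATB_OPERATION_EXECUTE_ASYNC",
    "ATB_WORKSPACE_MEM_ALLOC_GLOBAL",
]

def missing_vars(expected=EXPECTED):
    """Return the names of expected variables absent from the environment."""
    return [name for name in expected if name not in os.environ]

if __name__ == "__main__":
    missing = missing_vars()
    if missing:
        print("Missing:", ", ".join(missing))
    else:
        print("All expected variables are set")
```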
---

## 4. Installation Verification

```bash
# 1. Check the CANN version
cat /usr/local/Ascend/ascend-toolkit/latest/version.cfg

# 2. Check the NNAL/ATB version (only the current version should be present)
ls /usr/local/Ascend/nnal/atb/

# 3. Verify torch_npu
python -c "import torch_npu; print(torch_npu.__version__)"

# 4. Verify the NPU is available
python -c "
import torch
import torch_npu
print('NPU available:', torch.npu.is_available())
print('NPU count:', torch.npu.device_count())
print('NPU name:', torch.npu.get_device_name(0))
"

# 5. Verify ATB operators (critical!)
python -c "
import torch
import torch_npu
x = torch.randn(2, 4, dtype=torch.float16).npu()
w = torch.randn(4, 4, dtype=torch.float16).npu()
c = torch.zeros(2, 4, dtype=torch.float16).npu()
torch_npu._npu_matmul_add_fp32(x, w, c)
print('ATB matmul OK')
"

# 6. Verify the plugin loads
python -c "
import vllm
from vllm.platforms import current_platform
print('Platform:', current_platform.device_name)
"
```
---

## 5. Quick Start

```bash
# Offline inference test
cd /workspace/mnt/vllm_ascend/vllm-npu-plugin
python demo.py

# OpenAI-compatible API server
python -m vllm.entrypoints.openai.api_server \
    --model /workspace/mnt/vllm_ascend/Qwen2.5-7B-Instruct \
    --dtype float16 \
    --trust-remote-code \
    --host 0.0.0.0 \
    --port 8000
```
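Once the server above is running, any OpenAI-compatible client can exercise it. A minimal sketch using only the standard library; the model name must match the `--model` path from the launch command, and `/v1/completions` is the standard OpenAI-compatible route:

```python
import json
from urllib import request

# Payload for the OpenAI-compatible completions endpoint; the model
# field must match the --model path used to launch the server.
payload = {
    "model": "/workspace/mnt/vllm_ascend/Qwen2.5-7B-Instruct",
    "prompt": "Hello, my name is",
    "max_tokens": 32,
    "temperature": 0.7,
}

req = request.Request(
    "http://127.0.0.1:8000/v1/completions",
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)

# Requires the server from the block above to be running:
# with request.urlopen(req) as resp:
#     print(json.load(resp)["choices"][0]["text"])
print(json.dumps(payload, indent=2))
```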
---

## 6. Common Issues

| Error | Cause | Fix |
|------|------|------|
| `LinearOperation setup failed!` | NNAL/ATB version does not match CANN | Upgrade NNAL to the same version as CANN |
| `ReshapeAndCacheOperation setup failed!` | Same as above | Same as above |
| `Cannot re-initialize NPU in forked subprocess` | NPU was initialized before the fork | Already fixed in the plugin code (lazy initialization) |
| `vllm._C not found` | Benign warning; vLLM's CUDA C++ extension is not needed on NPU | Ignore |
| `vllm_npu_C not found` | Benign warning; only present after compiling the C++ extension | Ignore; no functional impact |