zzh/vllm-npu-plugin
Mirror of https://github.com/handsomezhuzhu/vllm-npu-plugin.git (synced 2026-02-20 11:42:30 +00:00)
vllm-npu-plugin/vllm_npu at 5bef2da1f1e22b19ea6fde4c8e4b172841e51cd6

Latest commit: 5bef2da1f1 by handsomezhuzhu (2026-02-10 22:05:06 +08:00)
feat: Implement the NPU platform plugin for vLLM, including platform registration, device management, custom operations, and configuration adaptation.
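vLLM discovers out-of-tree platforms through the `vllm.platform_plugins` entry-point group: each entry point resolves to a function that returns the dotted path of a Platform subclass. A minimal sketch of how this repo's registration likely looks, assuming it follows that convention (the function body and packaging snippet here are illustrative, not the repo's actual code):

```python
# Sketch of a vLLM out-of-tree platform plugin's registration hook.
# The names vllm_npu / NPUPlatform mirror this repo's layout, but the
# exact contents are assumptions.

def register() -> str:
    """Entry point for the 'vllm.platform_plugins' group.

    vLLM calls this at startup; returning the dotted path of the
    Platform subclass tells vLLM which platform to activate.
    """
    return "vllm_npu.platform.NPUPlatform"

# The matching packaging metadata (e.g. in pyproject.toml) would be:
#
#   [project.entry-points."vllm.platform_plugins"]
#   npu = "vllm_npu:register"
```

With this in place, installing the package is enough for vLLM to pick up the NPU platform; no changes to vLLM itself are needed.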
attention/  feat: Add Ascend NPU attention backend with NPU-specific FlashAttention, LayerNorm, and Rotary Embedding implementations. (2026-02-10 21:56:45 +08:00)
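Of the kernels named in that commit, rotary embedding is the simplest to illustrate: it rotates each (even, odd) channel pair of the query/key vectors by a position-dependent angle. A dependency-free sketch of that core math (the NPU backend would implement this as a fused device kernel; these helper names are hypothetical):

```python
import math

def rope_rotate(x1: float, x2: float, theta: float) -> tuple[float, float]:
    # Rotary embeddings apply a plain 2-D rotation to each channel pair;
    # this is the per-pair operation the fused kernel vectorizes.
    return (x1 * math.cos(theta) - x2 * math.sin(theta),
            x1 * math.sin(theta) + x2 * math.cos(theta))

def rope_angles(pos: int, dim: int, base: float = 10000.0) -> list[float]:
    # One angle per channel pair, using the standard inverse-frequency
    # schedule: theta_i = pos * base^(-2i/dim).
    return [pos * base ** (-2 * i / dim) for i in range(dim // 2)]
```

At position 0 every angle is zero, so the rotation is the identity; that invariant is a handy sanity check when porting the kernel to a new device.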
distributed/  feat: initial vllm-npu-plugin for Ascend NPU adaptation (2026-02-10 11:06:01 +08:00)
ops/  feat: Add Ascend NPU attention backend with NPU-specific FlashAttention, LayerNorm, and Rotary Embedding implementations. (2026-02-10 21:56:45 +08:00)
worker/  fix: add initialize_cache method to NPU worker (2026-02-10 19:42:32 +08:00)
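That fix addresses a common plugin pitfall: vLLM's engine calls `initialize_cache(num_gpu_blocks, num_cpu_blocks)` on every worker after profiling decides the KV-cache size, so a platform worker missing the method fails at startup. A minimal sketch of the shape of such a method (this class is a stand-in, not the repo's actual worker):

```python
# Hypothetical stand-in for the repo's NPU worker, showing the kind of
# initialize_cache method the fix adds. A real worker would also allocate
# the KV-cache tensors on the NPU with the given block counts.

class NPUWorker:
    def __init__(self) -> None:
        self.cache_initialized = False
        self.num_device_blocks = 0
        self.num_cpu_blocks = 0

    def initialize_cache(self, num_gpu_blocks: int, num_cpu_blocks: int) -> None:
        # Record the block counts the scheduler settled on; the engine
        # calls this once, after memory profiling.
        self.num_device_blocks = num_gpu_blocks
        self.num_cpu_blocks = num_cpu_blocks
        self.cache_initialized = True
```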
__init__.py  feat: Implement the NPU platform plugin for vLLM, including platform registration, device management, custom operations, and configuration adaptation. (2026-02-10 22:05:06 +08:00)
cuda_compat.py  feat: add CUDA-to-NPU monkey patches for GPUModelRunner compatibility (2026-02-10 19:09:14 +08:00)
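The monkey-patching technique that commit describes works by rebinding CUDA-specific attributes on an existing class at import time, so unmodified callers transparently hit NPU code paths. A generic sketch of the mechanism with stand-in classes (the real patches target vLLM internals such as GPUModelRunner):

```python
# Generic sketch of CUDA-to-NPU monkey patching. FakeGPUModelRunner is a
# stand-in; the real cuda_compat.py rebinds attributes on vLLM's classes.

class FakeGPUModelRunner:
    def device_name(self) -> str:
        return "cuda:0"

def _npu_device_name(self) -> str:
    return "npu:0"

def apply_npu_patches() -> None:
    # Rebind the method on the class, not on an instance, so every
    # existing and future instance picks up the NPU version.
    FakeGPUModelRunner.device_name = _npu_device_name
```

Patching the class keeps the change global and reversible, but it must run before any code caches a bound method, which is why such patches are usually applied at plugin import time.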
platform.py  feat: Implement the NPU platform plugin for vLLM, including platform registration, device management, custom operations, and configuration adaptation. (2026-02-10 22:05:06 +08:00)