zzh / vllm-npu-plugin
Mirror of https://github.com/handsomezhuzhu/vllm-npu-plugin.git, last synced 2026-02-20 11:42:30 +00:00.

vllm-npu-plugin / vllm_npu at commit 3aebca03d94e7793f19603792266a468f398e276

Latest commit 3aebca03d9 by handsomezhuzhu, 2026-02-10 21:26:42 +08:00:
feat: Add Ascend NPU attention backend utilizing torch_npu FlashAttention and KV cache operations.

Contents:

attention/      feat: Add Ascend NPU attention backend utilizing torch_npu FlashAttention and KV cache operations.  2026-02-10 21:26:42 +08:00  (sketch below)
distributed/    feat: initial vllm-npu-plugin for Ascend NPU adaptation  2026-02-10 11:06:01 +08:00
ops/            feat: initial vllm-npu-plugin for Ascend NPU adaptation  2026-02-10 11:06:01 +08:00
worker/         fix: add initialize_cache method to NPU worker  2026-02-10 19:42:32 +08:00  (sketch below)
__init__.py     feat: add CUDA-to-NPU monkey patches for GPUModelRunner compatibility  2026-02-10 19:09:14 +08:00
cuda_compat.py  feat: add CUDA-to-NPU monkey patches for GPUModelRunner compatibility  2026-02-10 19:09:14 +08:00  (sketch below)
platform.py     feat: initial vllm-npu-plugin for Ascend NPU adaptation  2026-02-10 11:06:01 +08:00  (sketch below)
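
The commit messages above outline the plugin's shape; the sketches below illustrate the pieces they name. The attention/ entry describes a backend built on torch_npu's fused FlashAttention kernel. A minimal sketch of such a call, assuming the npu_fusion_attention kernel with a BSH (batch, sequence, hidden) layout; the fused_attention function and the scale handling are illustrative, not the plugin's actual code:

    import torch
    import torch_npu


    def fused_attention(query: torch.Tensor,
                        key: torch.Tensor,
                        value: torch.Tensor,
                        num_heads: int) -> torch.Tensor:
        # torch_npu's fused FlashAttention kernel; the call returns a
        # tuple whose first element is the attention output.
        head_dim = query.shape[-1] // num_heads
        outputs = torch_npu.npu_fusion_attention(
            query, key, value, num_heads,
            input_layout="BSH",
            scale=head_dim ** -0.5,
        )
        return outputs[0]

The KV cache operations the commit also mentions are not sketched here, since the exact torch_npu ops in use are not visible from the listing.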
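
The fix against worker/ adds an initialize_cache method to the NPU worker. In vLLM's worker interface, initialize_cache receives the block counts the engine settled on during memory profiling, before the KV cache is allocated. A minimal sketch under that assumption; NPUWorker here is a stand-in, not the plugin's class:

    class NPUWorker:
        """Stand-in worker; only the added method is sketched."""

        def __init__(self, cache_config):
            self.cache_config = cache_config

        def initialize_cache(self, num_gpu_blocks: int,
                             num_cpu_blocks: int) -> None:
            # Record the engine's chosen block counts so the KV cache
            # can be sized when it is allocated later.
            self.cache_config.num_gpu_blocks = num_gpu_blocks
            self.cache_config.num_cpu_blocks = num_cpu_blocks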
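
cuda_compat.py and __init__.py share a commit that adds CUDA-to-NPU monkey patches so vLLM's CUDA-oriented GPUModelRunner can run unmodified. A minimal sketch of that pattern, assuming torch_npu's torch.npu namespace mirrors torch.cuda; the attribute list and the apply_cuda_compat name are illustrative:

    import torch
    import torch_npu  # noqa: F401  # importing registers the "npu" device

    # Illustrative subset; the real patch list in cuda_compat.py may differ.
    _REDIRECTED = (
        "is_available",
        "current_device",
        "device_count",
        "synchronize",
        "empty_cache",
    )


    def apply_cuda_compat() -> None:
        """Point selected torch.cuda functions at their torch.npu twins
        so code written against torch.cuda runs on Ascend."""
        for name in _REDIRECTED:
            npu_fn = getattr(torch.npu, name, None)
            if npu_fn is not None:
                setattr(torch.cuda, name, npu_fn)

Patching at the torch.cuda level keeps the upstream model-runner code untouched, at the cost of process-wide side effects, which is the usual trade-off with monkey patching.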
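
platform.py dates from the initial commit and, given the repository's purpose, plausibly hosts the vLLM platform integration. vLLM discovers out-of-tree platforms through the vllm.platform_plugins entry-point group declared in pyproject.toml: the entry-point function returns the dotted path of a Platform subclass, or None when the hardware is absent. A sketch under those assumptions; the NPUPlatform class path is hypothetical:

    def register() -> str | None:
        # Entry point for vLLM's "vllm.platform_plugins" group. Returning
        # the dotted path of a Platform subclass activates it; returning
        # None tells vLLM this platform is unavailable on the machine.
        try:
            import torch_npu  # noqa: F401
        except ImportError:
            return None
        return "vllm_npu.platform.NPUPlatform"  # hypothetical class path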