e22617f72e | 2026-02-10 22:15:26 +08:00
feat: Add Ascend NPU attention backend for vLLM using FlashAttention operators.

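The backend's core job is dispatch: prompt tokens run through a fused FlashAttention kernel, while single-token decode runs incremental FlashAttention against the KV cache. A rough sketch of that routing (class, helper, and metadata names are illustrative, not the plugin's actual identifiers):

```python
class AscendAttentionImpl:
    """Routes prefill to fused FlashAttention and decode to incremental
    FlashAttention on Ascend NPUs (dispatch only; kernels stubbed out)."""

    def forward(self, query, key, value, kv_cache, attn_metadata):
        # New key/value tokens land in the paged KV cache first.
        self._reshape_and_cache(key, value, kv_cache, attn_metadata.slot_mapping)
        if attn_metadata.num_prefill_tokens > 0:
            # Prompt tokens (full or chunked prefill): fused FlashAttention.
            return self._prefill_forward(query, key, value, attn_metadata)
        # One new token per running sequence: incremental FlashAttention.
        return self._decode_forward(query, kv_cache, attn_metadata)

    # Kernel-facing helpers are stubs here; later entries in this log sketch
    # what some of them do.
    def _reshape_and_cache(self, key, value, kv_cache, slot_mapping): ...
    def _prefill_forward(self, query, key, value, attn_metadata): ...
    def _decode_forward(self, query, kv_cache, attn_metadata): ...
```
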
4ca9d52cf2 | 2026-02-10 21:56:45 +08:00
feat: Add Ascend NPU attention backend with NPU-specific FlashAttention, LayerNorm, and Rotary Embedding implementations.

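For the NPU-specific LayerNorm, a minimal RMSNorm sketch, assuming torch_npu exposes npu_rms_norm(x, weight, eps) and that the first element of its returned tuple is the normalized tensor:

```python
import torch
import torch_npu


class AscendRMSNorm(torch.nn.Module):
    def __init__(self, hidden_size: int, eps: float = 1e-6):
        super().__init__()
        self.weight = torch.nn.Parameter(torch.ones(hidden_size))
        self.variance_epsilon = eps

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # npu_rms_norm returns a tuple; element 0 is the normalized output.
        return torch_npu.npu_rms_norm(x, self.weight, self.variance_epsilon)[0]
```
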
3aebca03d9 | 2026-02-10 21:26:42 +08:00
feat: Add Ascend NPU attention backend utilizing torch_npu FlashAttention and KV cache operations.

71fdf46880 | 2026-02-10 21:16:03 +08:00
fix: use additive float mask (-inf) for npu_fusion_attention to resolve garbage output

f54533fba7 | 2026-02-10 20:57:52 +08:00
fix: use 4D mask (1, 1, S, S) for BSND layout in npu_fusion_attention

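The two mask fixes above converge on one construction: an additive float mask shaped (1, 1, S, S) with -inf above the diagonal, so it broadcasts over batch and heads in the BSND layout and disallowed positions are suppressed before softmax. A sketch in plain PyTorch:

```python
import torch


def make_causal_mask(seq_len: int, dtype=torch.float16, device="npu"):
    # True above the diagonal marks positions a query must not attend to.
    upper = torch.triu(
        torch.ones(seq_len, seq_len, dtype=torch.bool, device=device), diagonal=1
    )
    mask = torch.zeros(seq_len, seq_len, dtype=dtype, device=device)
    mask.masked_fill_(upper, float("-inf"))
    # (1, 1, S, S): broadcasts over batch and heads in BSND.
    return mask.view(1, 1, seq_len, seq_len)
```
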
37af1ddc1f | 2026-02-10 20:42:47 +08:00
fix: use npu_fusion_attention loop (BSND) for prefill_no_cache to fix crash

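A sketch of the per-sequence prefill loop this fix describes, running each prompt as a batch of one in BSND layout. The npu_fusion_attention argument order (query, key, value, head_num, input_layout) and the attention output being element 0 of the returned tuple are assumptions about the torch_npu API, not code taken from the plugin:

```python
import torch
import torch_npu


def prefill_no_cache(query, key, value, seq_lens, num_heads, scale, mask):
    # query/key/value: (total_tokens, num_heads, head_size), packed per sequence.
    outputs, start = [], 0
    for seq_len in seq_lens:
        end = start + seq_len
        q = query[start:end].unsqueeze(0)  # (1, S, N, D): BSND with B=1
        k = key[start:end].unsqueeze(0)
        v = value[start:end].unsqueeze(0)
        out = torch_npu.npu_fusion_attention(
            q, k, v, num_heads,
            input_layout="BSND",
            atten_mask=mask[..., :seq_len, :seq_len],
            scale=scale,
        )[0]
        outputs.append(out.squeeze(0))
        start = end
    return torch.cat(outputs, dim=0)
```
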
5337842e92 | 2026-02-10 20:33:14 +08:00
fix: pure PyTorch reshape_and_cache + _npu_flash_attention prefill

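The pure-PyTorch reshape_and_cache mentioned here can be expressed as a flat scatter over slot indices (slot = block_id * block_size + block_offset), written against the (2, num_blocks, block_size, num_kv_heads, head_size) cache layout used in this log; a minimal sketch:

```python
import torch


def reshape_and_cache(key, value, kv_cache, slot_mapping):
    # key/value: (num_tokens, num_kv_heads, head_size)
    # kv_cache:  (2, num_blocks, block_size, num_kv_heads, head_size)
    # slot_mapping: (num_tokens,) flat slot index per token
    # (padding slots, e.g. -1 entries, are not handled in this sketch).
    key_cache, value_cache = kv_cache[0], kv_cache[1]
    num_blocks, block_size = key_cache.shape[0], key_cache.shape[1]
    flat_key = key_cache.view(num_blocks * block_size, *key_cache.shape[2:])
    flat_value = value_cache.view(num_blocks * block_size, *value_cache.shape[2:])
    # Writes go through the views into the cache storage.
    flat_key[slot_mapping] = key
    flat_value[slot_mapping] = value
```
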
30cf7ccd1f | 2026-02-10 20:29:18 +08:00
fix: revert to _npu_reshape_and_cache (contiguous) and _npu_flash_attention

a58c3fe973 | 2026-02-10 20:23:03 +08:00
fix: correct layout for npu_incre_flash_attention (BNSD requires B,H,1,D)

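The layout fix boils down to a view: with one new token per running sequence, the BNSD decode query must be (batch, num_heads, 1, head_size), not (batch, 1, num_heads, head_size). A minimal illustration:

```python
import torch


def to_bnsd_decode_query(query: torch.Tensor) -> torch.Tensor:
    # query: (batch, num_heads, head_size), one new token per sequence.
    batch, num_heads, head_size = query.shape
    # BNSD with a sequence dimension of 1.
    return query.view(batch, num_heads, 1, head_size)
```
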
e7655a0745 | 2026-02-10 20:14:42 +08:00
fix: proper PrefillNoCache detection, fallback to npu_fusion_attention for chunked prefill (CANN compat)

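A sketch of the detection this fix implies, with hypothetical metadata field names: a batch counts as prefill-no-cache only when every sequence's query spans its whole context, i.e. nothing for it is already in the KV cache; anything else is chunked prefill and takes the fallback kernel for CANN compatibility:

```python
def select_prefill_path(attn_metadata):
    # query_lens: new tokens per sequence; seq_lens: total context per sequence.
    no_cached_context = all(
        q_len == ctx_len
        for q_len, ctx_len in zip(attn_metadata.query_lens, attn_metadata.seq_lens)
    )
    if no_cached_context:
        return "prefill_no_cache"       # per-sequence npu_fusion_attention loop
    return "chunked_prefill_fallback"   # attention over cached + new tokens
```
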
810a2ef757 | 2026-02-10 20:06:52 +08:00
refactor: align attention with Huawei vllm-ascend - reshape_and_cache with kv_cache[0]/[1], _get_fia_params, npu_fused_infer_attention_score for chunked prefill, add actual_seq_lengths_q

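One piece of this refactor, actual_seq_lengths_q, is sketched below under the assumption that the fused-infer kernel expects cumulative end offsets of each sequence's query tokens rather than raw per-sequence lengths:

```python
from itertools import accumulate


def build_actual_seq_lengths_q(query_lens):
    # e.g. query_lens [3, 5, 2] -> [3, 8, 10]: running end offsets of each
    # sequence's query tokens in the packed batch.
    return list(accumulate(query_lens))
```
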
b8b4516b98 | 2026-02-10 19:56:47 +08:00
fix: replace ATB reshape_and_cache with pure PyTorch indexing

7120cd803b | 2026-02-10 19:27:10 +08:00
fix: KV cache shape needs leading 2 dim for key+value pair

a274fd82ad | 2026-02-10 19:23:20 +08:00
fix: accept cache_dtype_str in get_kv_cache_shape

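The two adjacent fixes above both touch get_kv_cache_shape; a combined sketch: the extra leading dimension of 2 keeps key and value in one tensor (later indexed as kv_cache[0]/kv_cache[1]), and the cache_dtype_str argument is accepted purely for interface compatibility with the caller mentioned in the log:

```python
from typing import Optional, Tuple


def get_kv_cache_shape(
    num_blocks: int,
    block_size: int,
    num_kv_heads: int,
    head_size: int,
    cache_dtype_str: Optional[str] = None,  # accepted, unused in this sketch
) -> Tuple[int, int, int, int, int]:
    # Leading 2: index 0 holds the key cache, index 1 the value cache.
    return (2, num_blocks, block_size, num_kv_heads, head_size)
```
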
e75504df72 | 2026-02-10 11:06:01 +08:00
feat: initial vllm-npu-plugin for Ascend NPU adaptation
- NPUPlatform: device management, HCCL process group, config adaptation
- AscendAttentionBackend: npu_fusion_attention (prefill) + npu_incre_flash_attention (decode)
- NPUCommunicator: HCCL-based distributed communication
- NPUWorker: NPU device init, memory profiling
- Custom ops: SiluAndMul, RMS norm, rotary embedding
- Plugin registered via vllm.platform_plugins entry point
Based on vllm-ascend official pattern, targeting Ascend 910B

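The last bullet of this initial commit, registration through a vllm.platform_plugins entry point, is sketched below. The mechanism (an entry point whose function returns the dotted path of the platform class, or None when the hardware is absent) follows vLLM's plugin convention; module, function, and distribution names here are illustrative:

```python
# setup.py (illustrative)
from setuptools import setup

setup(
    name="vllm-npu-plugin",
    entry_points={
        "vllm.platform_plugins": [
            "ascend = vllm_npu_plugin:register",
        ],
    },
)

# vllm_npu_plugin/__init__.py (illustrative)
# def register():
#     """Return the fully qualified NPUPlatform class path when an Ascend
#     NPU is usable in this process, else None."""
#     return "vllm_npu_plugin.platform.NPUPlatform"
```
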