Commit Graph

9 Commits

SHA1 Message Date
5337842e92 fix: pure PyTorch reshape_and_cache + _npu_flash_attention prefill 2026-02-10 20:33:14 +08:00
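
A minimal sketch of the pure-PyTorch reshape_and_cache this commit (and the earlier "pure PyTorch indexing" fix below) describes, assuming the [2, num_blocks, block_size, num_kv_heads, head_size] cache layout from the "leading 2 dim" fix further down; function and argument names are illustrative, not the plugin's actual code:

    import torch

    def reshape_and_cache(
        key: torch.Tensor,           # [num_tokens, num_kv_heads, head_size]
        value: torch.Tensor,         # [num_tokens, num_kv_heads, head_size]
        kv_cache: torch.Tensor,      # [2, num_blocks, block_size, num_kv_heads, head_size]
        slot_mapping: torch.Tensor,  # [num_tokens], flat cache slot per token
    ) -> None:
        block_size = kv_cache.shape[2]
        block_idx = slot_mapping // block_size  # block that holds each token
        block_off = slot_mapping % block_size   # offset inside that block
        # Advanced indexing scatters all tokens at once; kv_cache[0] is the
        # key plane and kv_cache[1] the value plane, no custom kernel needed.
        kv_cache[0, block_idx, block_off] = key
        kv_cache[1, block_idx, block_off] = value

Indexing like this trades the fused ATB kernel for portability; the commits below flip between the two approaches.
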
30cf7ccd1f fix: revert to _npu_reshape_and_cache (contiguous) and _npu_flash_attention 2026-02-10 20:29:18 +08:00
a58c3fe973 fix: correct layout for npu_incre_flash_attention (BNSD requires B,H,1,D) 2026-02-10 20:23:03 +08:00
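
Shape-wise, the BNSD fix above amounts to the reshape below; a sketch assuming decode queries arrive flattened as [batch, num_heads * head_size], one token per sequence:

    import torch

    def to_bnsd_decode_query(query: torch.Tensor, num_heads: int, head_size: int) -> torch.Tensor:
        # BNSD layout = [batch, num_heads, seq_len, head_size]; during decode
        # each sequence contributes exactly one new token, so seq_len == 1 and
        # npu_incre_flash_attention expects [B, N, 1, D], not [B, 1, N, D].
        batch = query.shape[0]
        return query.view(batch, num_heads, 1, head_size)
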
e7655a0745 fix: proper PrefillNoCache detection, fallback to npu_fusion_attention for chunked prefill (CANN compat) 2026-02-10 20:14:42 +08:00
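
A hedged sketch of the detection this commit refers to; the metadata field names (num_decode_tokens, max_query_len, max_seq_len) are assumptions in the style of vLLM's attention metadata, not verified plugin code:

    def is_prefill_no_cache(attn_metadata) -> bool:
        # Pure prefill with no cached context: nothing is being decoded and
        # every query covers its full sequence, so query len == seq len.
        # Chunked prefill breaks the second condition (the query is only a
        # suffix of a longer cached sequence) and takes the fallback path.
        return (attn_metadata.num_decode_tokens == 0
                and attn_metadata.max_query_len == attn_metadata.max_seq_len)
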
810a2ef757 refactor: align attention with Huawei vllm-ascend - reshape_and_cache with kv_cache[0]/[1], _get_fia_params, npu_fused_infer_attention_score for chunked prefill, add actual_seq_lengths_q 2026-02-10 20:06:52 +08:00
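
The refactor above centralizes the kwargs for torch_npu.npu_fused_infer_attention_score in a _get_fia_params helper. A sketch of the idea follows; the keyword names are taken from the commit message and torch_npu's documented interface, but exact names and layouts vary across CANN/torch_npu versions, so treat every field here as an assumption to verify:

    def _get_fia_params(attn_metadata, num_heads, num_kv_heads, scale):
        # Gather per-batch sequence bookkeeping once so call sites stay small.
        # The query-side lengths (the commit's "actual_seq_lengths_q") and the
        # kv-side lengths let the fused op handle ragged chunked-prefill
        # batches without padding.
        return dict(
            num_heads=num_heads,
            num_key_value_heads=num_kv_heads,
            input_layout="TND",  # token-major ragged layout (assumption)
            scale=scale,
            actual_seq_lengths=attn_metadata.query_cum_lens,  # hypothetical field
            actual_seq_lengths_kv=attn_metadata.seq_lens,     # hypothetical field
            block_table=attn_metadata.block_tables,
            block_size=attn_metadata.block_size,
        )
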
b8b4516b98 fix: replace ATB reshape_and_cache with pure PyTorch indexing 2026-02-10 19:56:47 +08:00
7120cd803b fix: KV cache shape needs leading 2 dim for key+value pair 2026-02-10 19:27:10 +08:00
a274fd82ad fix: accept cache_dtype_str in get_kv_cache_shape 2026-02-10 19:23:20 +08:00
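
Taken together, the two fixes above change the cache-shape hook roughly as sketched below, assuming a vLLM-style static method; the extra cache_dtype_str parameter is accepted for interface compatibility even where unused:

    from typing import Tuple

    class AscendAttentionBackend:
        @staticmethod
        def get_kv_cache_shape(
            num_blocks: int,
            block_size: int,
            num_kv_heads: int,
            head_size: int,
            cache_dtype_str: str = "auto",  # accepted for API compat (fix above)
        ) -> Tuple[int, ...]:
            # The leading 2 packs key (index 0) and value (index 1) into one
            # tensor, matching the kv_cache[0]/kv_cache[1] indexing used by
            # reshape_and_cache.
            return (2, num_blocks, block_size, num_kv_heads, head_size)
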
e75504df72 feat: initial vllm-npu-plugin for Ascend NPU adaptation
- NPUPlatform: device management, HCCL process group, config adaptation
- AscendAttentionBackend: npu_fusion_attention (prefill) + npu_incre_flash_attention (decode)
- NPUCommunicator: HCCL-based distributed communication
- NPUWorker: NPU device init, memory profiling
- Custom ops: SiluAndMul, RMSNorm, rotary embedding
- Plugin registered via vllm.platform_plugins entry point

Based on the official vllm-ascend pattern, targeting Ascend 910B
2026-02-10 11:06:01 +08:00
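
For the "Plugin registered via vllm.platform_plugins entry point" bullet of the initial commit, a minimal registration sketch, assuming a vllm_npu_plugin package; vLLM's plugin loader calls the entry point and expects the platform's import path, or None when the platform is unavailable:

    # setup.py (hypothetical packaging for the plugin)
    from setuptools import setup

    setup(
        name="vllm-npu-plugin",
        entry_points={
            "vllm.platform_plugins": [
                "npu = vllm_npu_plugin:register",
            ],
        },
    )

    # vllm_npu_plugin/__init__.py (hypothetical)
    def register():
        try:
            import torch_npu  # noqa: F401  # probe for the Ascend torch backend
            return "vllm_npu_plugin.platform.NPUPlatform"
        except ImportError:
            return None

Returning None instead of raising keeps vLLM usable on hosts without the Ascend stack installed.
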