Commit Graph

  • 5534f2fa41 fix: Add weak-reference tensor fallback handling when C++ ops are unavailable main handsomezhuzhu 2026-02-11 00:17:14 +08:00
  • 43a2ed2f47 fix: Check the vllm_npu_C import at module load time to avoid torch.compile/dynamo tracing failures handsomezhuzhu 2026-02-11 00:14:35 +08:00
  • c00c47a5b2 feat: Add environment installation guide, verification, and benchmark documentation handsomezhuzhu 2026-02-11 00:13:26 +08:00
  • ae10ce68f0 fix: Defer SOC version initialization to avoid NPU lazy init issues handsomezhuzhu 2026-02-10 23:19:34 +08:00
  • f49538ea8d fix: Update MLAAttention import logic for vllm version compatibility handsomezhuzhu 2026-02-10 23:17:30 +08:00
  • 5df056dd17 fix: Improve vllm version parsing and assume compatibility for dev versions handsomezhuzhu 2026-02-10 23:14:54 +08:00
  • c63f4439c5 feat: Improve SOC version detection and sleep mode handling in utils.py handsomezhuzhu 2026-02-10 23:12:40 +08:00
  • 6680585975 Major overhaul handsomezhuzhu 2026-02-10 23:08:39 +08:00
  • 1baa36026c feat: Add vLLM NPU offline inference demo script. handsomezhuzhu 2026-02-10 22:19:41 +08:00
  • e22617f72e feat: Add Ascend NPU attention backend for vLLM using FlashAttention operators. handsomezhuzhu 2026-02-10 22:15:26 +08:00
  • 5bef2da1f1 feat: Implement the NPU platform plugin for vLLM, including platform registration, device management, custom operations, and configuration adaptation. handsomezhuzhu 2026-02-10 22:05:06 +08:00
  • 4ca9d52cf2 feat: Add Ascend NPU attention backend with NPU-specific FlashAttention, LayerNorm, and Rotary Embedding implementations. handsomezhuzhu 2026-02-10 21:56:45 +08:00
  • 3aebca03d9 feat: Add Ascend NPU attention backend utilizing torch_npu FlashAttention and KV cache operations. handsomezhuzhu 2026-02-10 21:26:42 +08:00
  • 71fdf46880 fix: use additive float mask (-inf) for npu_fusion_attention to resolve garbage output handsomezhuzhu 2026-02-10 21:16:03 +08:00
  • f54533fba7 fix: use 4D mask (1, 1, S, S) for BSND layout in npu_fusion_attention handsomezhuzhu 2026-02-10 20:57:52 +08:00
  • 37af1ddc1f fix: use npu_fusion_attention loop (BSND) for prefill_no_cache to fix crash handsomezhuzhu 2026-02-10 20:42:47 +08:00
  • 5337842e92 fix: pure pytorch reshape_and_cache + _npu_flash_attention prefill handsomezhuzhu 2026-02-10 20:33:14 +08:00
  • 30cf7ccd1f fix: revert to _npu_reshape_and_cache (contiguous) and _npu_flash_attention handsomezhuzhu 2026-02-10 20:29:18 +08:00
  • a58c3fe973 fix: correct layout for npu_incre_flash_attention (BNSD requires B,H,1,D) handsomezhuzhu 2026-02-10 20:23:03 +08:00
  • e7655a0745 fix: proper PrefillNoCache detection, fallback to npu_fusion_attention for chunked prefill (CANN compat) handsomezhuzhu 2026-02-10 20:14:42 +08:00
  • 810a2ef757 refactor: align attention with Huawei vllm-ascend - reshape_and_cache with kv_cache[0]/[1], _get_fia_params, npu_fused_infer_attention_score for chunked prefill, add actual_seq_lengths_q handsomezhuzhu 2026-02-10 20:06:52 +08:00
  • b8b4516b98 fix: replace ATB reshape_and_cache with pure PyTorch indexing handsomezhuzhu 2026-02-10 19:56:47 +08:00
  • 101435817a fix: add initialize_cache method to NPU worker handsomezhuzhu 2026-02-10 19:42:32 +08:00
  • 7120cd803b fix: KV cache shape needs leading 2 dim for key+value pair handsomezhuzhu 2026-02-10 19:27:10 +08:00
  • a274fd82ad fix: accept cache_dtype_str in get_kv_cache_shape handsomezhuzhu 2026-02-10 19:23:20 +08:00
  • c3631d65c2 fix: initialize TP/PP parallel groups after distributed environment handsomezhuzhu 2026-02-10 19:14:29 +08:00
  • 693e0a1d89 feat: add CUDA-to-NPU monkey patches for GPUModelRunner compatibility handsomezhuzhu 2026-02-10 19:09:14 +08:00
  • 0765fc9fd3 fix: pass world_size int to init_distributed_environment instead of vllm_config handsomezhuzhu 2026-02-10 18:58:21 +08:00
  • e75504df72 feat: initial vllm-npu-plugin for Ascend NPU adaptation handsomezhuzhu 2026-02-10 11:06:01 +08:00
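
For reference, here is a minimal sketch of the guarded-import pattern described in commits 43a2ed2f47 and 5534f2fa41: probe for the vllm_npu_C C++ extension at module load time and fall back to a pure-Python path for the weak-reference tensor helper when the C++ ops are unavailable. The op name `weak_ref_tensor` and the fallback behavior are illustrative assumptions, not the plugin's actual API.

```python
# Sketch of the guarded import + weak-ref tensor fallback (commits 43a2ed2f47, 5534f2fa41).
# Assumption: the C++ extension, when present, exposes a `weak_ref_tensor` op; the exact
# name and the fallback semantics are illustrative, not the plugin's real API.
import importlib.util

import torch

# Probe at module load time so torch.compile/dynamo never traces through a
# failing import of the C++ extension.
_HAS_NPU_C = importlib.util.find_spec("vllm_npu_C") is not None

if _HAS_NPU_C:
    import vllm_npu_C  # noqa: F401


def weak_ref_tensor(tensor: torch.Tensor) -> torch.Tensor:
    """Return a weak-reference view of `tensor` via the C++ op when available."""
    if _HAS_NPU_C and hasattr(vllm_npu_C, "weak_ref_tensor"):
        return vllm_npu_C.weak_ref_tensor(tensor)
    # Fallback: no C++ op available, return the tensor unchanged so callers still work.
    return tensor
```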
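
Similarly, commits f54533fba7 and 71fdf46880 describe switching npu_fusion_attention to a 4D (1, 1, S, S) additive float mask with -inf above the diagonal. A minimal sketch of building such a mask in plain PyTorch follows; how the mask is actually passed to npu_fusion_attention is not shown, and the helper name is hypothetical.

```python
# Sketch of the additive causal mask from commits f54533fba7 and 71fdf46880:
# shape (1, 1, S, S) for a BSND layout, float dtype, -inf above the diagonal
# so masked (future) positions contribute nothing after softmax.
import torch


def build_additive_causal_mask(seq_len: int, dtype: torch.dtype = torch.float16) -> torch.Tensor:
    mask = torch.zeros(1, 1, seq_len, seq_len, dtype=dtype)
    # Positions strictly above the diagonal correspond to future tokens.
    upper = torch.triu(torch.ones(seq_len, seq_len, dtype=torch.bool), diagonal=1)
    mask.masked_fill_(upper, float("-inf"))
    return mask
```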