* hexagon: restore HTP_OPMASK_QUEUE
* hexagon: honor OPMASK_SKIP_COMPUTE in hmx-matmul
* hex-prof: restore op profiling
* hex-prof: enable PMU
* hexagon: simplify and improve op-queuing with full profiling support
Add separate profile descriptors.
* hexagon: remove opsync and rename opmask into opstage
opsync is no longer needed since the profiler is fully async now.
The opmask name was confusing; opstage is more accurate.
* hexagon: refactor opbatch queue handling
* hexagon: add iface hooks for enabling profiler from the host
Also move all the PMU setup stuff out of the hex-utils since it's not intended for normal use.
* hexagon: make profiler mode configurable
On older devices getting PMU counters is expensive so it's now optional.
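A minimal sketch of what host-side mode selection could look like; GGML_HEXAGON_PROFILE is the variable documented later in this series, but the numeric levels used below are assumptions for illustration only:

```cpp
// Hypothetical sketch: reading a profiler level from the environment.
// The value semantics assumed here (0 = off, 1 = op timing, 2 = op timing + PMU)
// are an illustration, not the documented interface.
#include <cstdlib>

enum profile_mode { PROF_OFF, PROF_OPS, PROF_OPS_PMU };

static profile_mode profile_mode_from_env(void) {
    const char * v = std::getenv("GGML_HEXAGON_PROFILE");
    const int    n = v ? std::atoi(v) : 0;
    if (n >= 2) return PROF_OPS_PMU;  // PMU reads are costly on older devices
    if (n == 1) return PROF_OPS;
    return PROF_OFF;
}
```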
* hexagon: add support for setting profiler pmu events from env
* hexagon: simplify profiler output (no need to print buffs, etc)
* hexagon: simplify pmu counter formatting
* hexagon: add a simple profile post-proc tool
* hex-prof: add support for reading logs from stdin
* hexagon: document GGML_HEXAGON_PROFILE
* hex-prof: update default width for dims field
* hex-prof: fix linter warnings and errors
* Update ggml/src/ggml-hexagon/htp/htp-ops.h
Co-authored-by: Sigbjørn Skjæret <sigbjorn.skjaeret@scala.com>
* Update scripts/snapdragon/ggml-hexagon-profile.py
Co-authored-by: Sigbjørn Skjæret <sigbjorn.skjaeret@scala.com>
---------
Co-authored-by: Trivikram Reddy <tamarnat@qti.qualcomm.com>
Co-authored-by: Sigbjørn Skjæret <sigbjorn.skjaeret@scala.com>
Fixes #22237: the find_library(MATH_LIBRARY m) result was being
discarded and the target was linked against the literal 'm' string,
which prevented users from overriding the math library (e.g. for AMD
AOCL) via CMake variables. Now the discovered MATH_LIBRARY is used
directly.
* sycl : fused MoE mul_mat_vec_q for TG
Create an MMVQ kernel so ggml_sycl_mul_mat_id can consolidate
n_experts_used matmuls in a single kernel launch. The kernel
also reads expert IDs directly, removing a per-call host sync.
This is similar to the CUDA backend's ggml_cuda_mul_mat_vec_q*
paths.
All types supported in the current MMVQ are supported here as well:
Q2_K, Q3_K, Q4_K, Q5_K, Q6_K, Q4_0, Q4_1, Q5_0, Q5_1, Q8_0
It will fall back to the existing per-expert path when src0 has been rewritten
by opt_for_reorder(), and for any shape the fused path doesn't handle.
test-backend-ops passes for supported type/shape combos.
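As a rough CPU-side reference for the data flow (all names below are hypothetical; the real kernel is a SYCL MMVQ over quantized blocks), each (token, expert-slot) pair reads its expert id on the device and accumulates its own destination row, which is what removes the per-call host sync:

```cpp
#include <cstdint>
#include <vector>

// Sequential reference of the fused mul_mat_id data flow (hypothetical names).
// On the device, each (token, slot) pair below maps to one work-group of the
// single fused kernel launch, and the ids[] read happens in-kernel.
void fused_mul_mat_id_ref(const std::vector<std::vector<float>> & experts, // [n_expert][n_rows*n_cols]
                          const float *   src1,   // activations, [n_tokens][n_cols]
                          const int32_t * ids,    // routing,     [n_tokens][n_used]
                          float *         dst,    // output,      [n_tokens][n_used][n_rows]
                          int n_tokens, int n_used, int n_rows, int n_cols) {
    for (int t = 0; t < n_tokens; ++t) {
        for (int s = 0; s < n_used; ++s) {
            const int32_t e = ids[t*n_used + s];   // expert id read per pair, not on the host
            const float * w = experts[e].data();
            const float * x = src1 + t*n_cols;
            float *       y = dst + (t*n_used + s)*n_rows;
            for (int r = 0; r < n_rows; ++r) {
                float acc = 0.0f;
                for (int c = 0; c < n_cols; ++c) acc += w[r*n_cols + c] * x[c];
                y[r] = acc;
            }
        }
    }
}
```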
Benchmark: Qwen3-Next-35B-A3B Q4_K_M on Intel Arc B70 (SYCL0),
baseline 707c0b7a6, 16k context, -fa 0.
build/bin/llama-bench -hf unsloth/Qwen3.5-35B-A3B-GGUF:Q4_K_M \
-p 1024 -n 128 -d 16384 -ngl 99 -fa 0 -ub 2048 -r 2 -dev SYCL0
Before (3 runs on 707c0b7a6):
| test | run 1 | run 2 | run 3 |
| --------------- | ----------------:| ----------------:| ----------------:|
| pp1024 @ d16384 | 533.26 ± 4.87 | 535.20 ± 2.78 | 524.27 ± 3.10 |
| tg128 @ d16384 | 33.47 ± 0.02 | 33.31 ± 0.02 | 33.17 ± 0.05 |
After (3 runs on 707c0b7a6 + this patch):
| test | run 1 | run 2 | run 3 |
| --------------- | ----------------:| ----------------:| ----------------:|
| pp1024 @ d16384 | 534.06 ± 0.97 | 531.95 ± 0.02 | 520.94 ± 20.10 |
| tg128 @ d16384 | 45.85 ± 0.21 | 45.95 ± 0.45 | 46.22 ± 0.12 |
disclosure: Claude wrote it, but I reviewed and understand the implementation
(albeit my C is a little rusty).
* sycl: also support nvfp4 and mxfp4 expert types
* sycl: terser comments/nested dispatch in response to review
* sycl: more comment cleanup in mmvq.cpp/hpp
---------
Co-authored-by: Debian <aaron@openllmi.net.bots.is>
* shader(im2col): implement the im2col shader
* shader(im2col): clean the formatting issues
* shader(im2col): clean the editorconfig checker warning
* fix(shader): address the workgroup issues of im2col and conv2d
* Only run webgpu CI on my fork
* Implement set_tensor_async
* Implement synchronize api
* Implement event creation and deletion API
* Cleanup
* Cleanup
* Comment out jobs for local CI run
* Add webgpu only workflow
* Delete .github/workflows/build-webgpu.yml
* Cleanup
* Cleanup
* Update API with function handlers
* Run clang-format
* Replace one-shot buffer with a direct queue.WriteBuffer using the buffer context
* fused rms_norm_mul + mul
* Add GGML_WEBGPU_DISABLE_FUSION for being able to disable kernel fusion.
* Decouple num_fused_ops from webgpu_context; misc cleanup
* Fix eps handling and remove disable_fusion.
* Fix not to use c++20 initializers.
* sycl: size mul_mat_id staging buffers by routed rows
Previously src1_contiguous/dst_contiguous in ggml_sycl_mul_mat_id were
sized to ggml_nelements(src1/dst), which over-allocates when ne12 > 1
and can fail with UR_RESULT_ERROR_OUT_OF_HOST_MEMORY on Level Zero for
MoE models (notably with --cpu-moe). Size them by the actual number of
routed rows (ids->ne[1] * n_ids) instead.
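A small sketch of the new sizing (illustrative helper; only ids->ne[1] and n_ids come from the actual change):

```cpp
#include <cstddef>
#include <cstdint>

// Hypothetical helper illustrating the sizing change: stage only the rows
// that are actually routed (tokens * experts used per token) times the row
// width, instead of every element of src1/dst.
static size_t staged_elements(int64_t n_tokens /* ids->ne[1] */,
                              int64_t n_ids    /* experts used per token */,
                              int64_t row_width) {
    return (size_t) n_tokens * (size_t) n_ids * (size_t) row_width;
}
```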
* sycl: add bf16 mul_mat fast path via DNNL
When src0 is BF16 (commonly the case for lm_head / output.weight), the
existing f16 path is skipped because bf16 isn't covered, and the f32
fallback dequantizes the entire src0 slab to f32 in a single pool alloc
(row_diff*ne00 floats). For large-vocab models this can reach several
GB and fail with UR_RESULT_ERROR_OUT_OF_HOST_MEMORY on Level Zero.
Add a bf16xbf16 -> f32 DNNL matmul fast path that uses the bf16 storage
in place and only materializes a small src1 bf16 conversion buffer. bf16
matmul accumulates in f32, so it's correct even when the op requests
GGML_PREC_F32 (as lm_head does).
- gemm.hpp: map bfloat16 to dnnl::memory::data_type::bf16.
- convert.{hpp,cpp}: expose ggml_get_to_bf16_sycl for f32/f16/bf16 -> bf16.
- ggml-sycl.cpp: take the bf16 path early in ggml_sycl_op_mul_mat_sycl
when DNNL and GGML_SYCL_HAS_BF16 are both available.
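To make the memory argument concrete, a back-of-the-envelope sketch (hypothetical helpers; the sizes are taken from the description above):

```cpp
#include <cstddef>
#include <cstdint>

// The f32 fallback materializes the whole src0 slab (row_diff * ne00 floats),
// while the bf16 fast path only converts src1 (ne10 * ne11 values) to bf16.
static size_t f32_fallback_bytes(int64_t row_diff, int64_t ne00) {
    return (size_t) row_diff * (size_t) ne00 * sizeof(float);
}
static size_t bf16_path_bytes(int64_t ne10, int64_t ne11) {
    return (size_t) ne10 * (size_t) ne11 * sizeof(uint16_t);   // bf16 is 2 bytes
}
```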
* ggml(webgpu): fix busy-polling in waitAny under Emscripten after #20618, and remove the busy-loop webgpu log
* Merge with upstream
* Fix GET_ROWS packed integer NaN when using f16 as memory buffer in shader quants
* Update Unary wgsl EXP and EXPM1 for f16 stability
* Fix GET_ROWS IQ4_XS struct for NaN f16 canonicalization
* Fix numerical precision for unary sqrt when working with f16
* Fix NaN canonicalization for packed integers using f16
* Update err threshold for binary div ops when using f16
* backend: Keep one Dawn/WebGPU instance alive for the lifetime of the static backend
* clean: uncomment existing code logs
* clean: clean the unnecessary debug info
* Refactor and generalize dequant helpers
* Remove deprecated quant structs
* Refactor shader defines to reduce repetition
* Remove error override for F16 type
* fix: fix the accidental removal of the proper initialization of ctx
* clean: clean legacy and format code
* fix: did not modify tests ops
* shader(conv2d): add conv2d shader kernels and pass f32 and f16 tests
* shader(conv2d): fix the out of bounds memory access in the weight indexing
* shader(conv2d): clean unused variables and optimize the computation
* merge: use the new entries function
* clean: address the formatting issues
* clean: address the warning issues
* clear: clean the shader editorconfig-checker issues
* clear: clean the shader editorconfig-checker with utf-8
---------
Co-authored-by: Jeremy J. Hartmann <jeremy@mtion.tv>
* Thread safety per request only
* Fix ROPE yarn case
* Fix sticky stateful config
* Use i4/i8 directly for symmetric quant
* Use weightless caching
* Add WeightlessCacheAttribute to reduce NPU memory usage
* Gelu tanh support (#125)
* Imrope support (#126)
* fix(openvino): explicit ov::Tensor frees in ggml_backend_openvino_free
* add GPU,NPU support in OV Dockerfile
* add build-openvino.yml ci
* Fix sticky stateful config
* add concurrency to ov-gpu ci runs. Move OV CI to build-openvino.yml
* fix thread-safety of shared runtime context
* rope type abstraction for frontend translations
* fix editorconfig
---------
Co-authored-by: Mustafa Cavus <mustafa.cavus@intel.com>
Co-authored-by: Dan Hoffman <dhoff749@gmail.com>
Co-authored-by: Ravi Panchumarthy <ravi.panchumarthy@intel.com>
* Fix delayed AllReduce on Gemma-4 MoE
Skip forward past nodes that don't consume the current one, and allow a chain of MULs.
* Check for all sources before skipping nodes
* Address review comments
* Implemented optimized q1_0 dot for x86 and generic
* Removed redundant helper definition
* Removed two redundant instructions from AVX q1_0 dot
* Fixed inconsistency with fp16 conversion for generic q1_0 dot and deduplicated generic fallback
* Style cleanup around AVX q1_0 dot
* Replaced explicitly unrolled blocks with inner for loop for q1_0
* Replaced scalar ARM q1_0 impl with new generic one
* merged properly, but q3_k and q5_k are slow with u32 indexing
* Start on new mat-vec
* New format float paths working
* Working q4_0
* Work on remaining legacy q-types
* port k-quants to new matvec
* remove old shader
* Remove old constants, format
* remove accidental file
---------
Co-authored-by: Neha Abbas <nehaabbas@ReeseLevines-MacBook-Pro.local>
Co-authored-by: Reese Levine <reeselevine1@gmail.com>
* [SYCL] Fix reorder MMVQ assert on unaligned vocab sizes
The reorder mul_mat_vec_q dispatchers for Q4_0, Q8_0, Q4_K, and Q6_K
asserted that block_num_y was a multiple of 16 subgroups. Models with
a vocab size not divisible by 16 (for example HY-MT at 120818) aborted
on model load when the output projection tripped the assert.
I replaced the assert with padding: block_num_y now rounds up to a
whole number of subgroup-sized workgroups. The kernel already has the
row bounds check (`if (row >= nrows) return;`) so the extra padded
threads early-exit cleanly. Row values are uniform across a subgroup
so the collective reduce stays safe.
For aligned vocab sizes the padded block_num_y equals the old value,
so the kernel launch is identical and there is no regression.
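The padding itself is just a ceil-division in place of the old exact division; a sketch with illustrative names:

```cpp
#include <cstdint>

// Before: assert(nrows % rows_per_workgroup == 0) aborted on odd vocab sizes.
// After: round the workgroup count up; the kernel's existing bounds check
// ("if (row >= nrows) return;") retires the padded threads, and the row is
// uniform within a subgroup so the collective reduce is unaffected.
static int64_t padded_block_num_y(int64_t nrows, int64_t rows_per_workgroup) {
    return (nrows + rows_per_workgroup - 1) / rows_per_workgroup;
}
```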
Thanks to @arthw for flagging the relationship to #21527.
Fixes #22020.
AI-assisted coding, tested on Intel B70 hardware.
* sycl: use WARP_SIZE for num_subgroups in reorder MMVQ launches
Replaces the hardcoded 16 with WARP_SIZE in the four reorder_mul_mat_vec
launch helpers (Q4_0, Q8_0, Q4_K, Q6_K). Compile-time no-op on the Intel
target where WARP_SIZE is 16, but makes the relationship to subgroup
size explicit. Per review by @NeoZhangJianyu on #22035.
Assisted by Claude.
* cache subgraph splits when cgraph is unchanged
Skip per-call subgraph construction in ggml_backend_meta_graph_compute when the same ggml_cgraph is used consecutively.
Assign a uid to every sub-graph so that CUDA's fast uid check path hits too.
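Roughly, the reuse check amounts to remembering which cgraph was split last time (field names below are illustrative, not the final ones from the later renames):

```cpp
#include <cstdint>

// Hypothetical sketch of the reuse check in graph_compute: if the caller
// passes the same (unchanged) cgraph again, skip rebuilding the subgraph
// splits and reuse the ones from the previous call. Every split keeps a uid
// so backends with a fast uid path (e.g. CUDA) can also hit their cache.
struct meta_ctx_sketch {
    uint64_t last_graph_uid = 0;   // uid of the cgraph handled last call
    int      last_n_splits  = 0;   // number of cached subgraph splits
};

static bool can_reuse_splits(meta_ctx_sketch & ctx, uint64_t graph_uid) {
    if (graph_uid != 0 && graph_uid == ctx.last_graph_uid) {
        return true;                 // same graph, reuse cached splits
    }
    ctx.last_graph_uid = graph_uid;  // rebuild splits, remember for next call
    return false;
}
```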
* Address review comments
* Keep the scope as is
* Rename last_uid and last_n_subgraphs field. Remove last_max_tmp_size field. Refactor code.
* Address review comments
* Update ggml/src/ggml-backend-meta.cpp
Co-authored-by: Johannes Gäßler <johannesg@5d6.de>
* Update ggml/src/ggml-backend-meta.cpp
Co-authored-by: Johannes Gäßler <johannesg@5d6.de>
---------
Co-authored-by: Johannes Gäßler <johannesg@5d6.de>
* rpc : refactor the RPC transport
Move all transport related code into a separate file and use the
socket_t interface to hide all transport implementation details.
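The shape of the abstraction is the interesting part; a hypothetical sketch (not the actual socket_t declaration) of a transport that callers use without knowing what carries the bytes:

```cpp
#include <cstddef>
#include <memory>

// Hypothetical interface sketch: the RPC code sends and receives byte buffers
// through this, and the concrete implementation (TCP socket, local socket,
// anything else) stays private to the transport file.
struct rpc_transport_sketch {
    virtual ~rpc_transport_sketch() = default;
    virtual bool send_data(const void * data, size_t size) = 0;
    virtual bool recv_data(void * data, size_t size) = 0;
};

using rpc_transport_ptr = std::shared_ptr<rpc_transport_sketch>;
```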
* fix win32
* better socket_t construction
* Update workflows to remove dependence on llvmpipe
* Try setting Dawn_DIR
* remove c++20 initializers
* Move to proper guid
* Try avoiding segfaults on vulkan backend process exit
* Remove compiler warnings on parameter casting
* Fix soft_max and update reg_tile accumulation to f32 for better precision
* Refactor flash_attn a bit
* remove c++20 initializers and format
* Increase div precision for NVIDIA
* revert div precision and comment out ggml-ci node for now
* Formatting
* Try debugging on a failing CI node
* Revert "Try debugging on a failing CI node"
This reverts commit 1971e33cba919915e12bcfd5828abfbd54ca942e.
* optimize hmx_mat_mul functions by calculating row and column tiles upfront
* refactor core_dot_chunk_fp16 to use size_t for tile counts and improve readability
* wip
* set scale outside of loop
* wip
* refactor core_mma_chunk_fp16 and mat_mul_qk_0_d16a32 to use size_t for tile counts
* wip
* wip
* refactor transfer_output_chunk_fp16_to_fp32 to use size_t for dimensions
* refactor core_dot_chunk_fp16 to use size_t for tile row stride calculation
* wip
* refactor hmx_mat_mul functions to use hvx_vec_splat_f16 for column scales initialization
* refactor hmx_mat_mul_permuted_w16a32_batched to streamline scale setting and locking
* refactor core_dot_chunk_fp16 to improve tile stride calculations for output
* refactor hmx_mat_mul functions to use Q6_V_vsplat_R for column scales initialization
* fix compiling error
* wip
* optimize row and column tile indexing in core_mma_chunk_fp16 function
* wip
* Revert "wip"
This reverts commit cde679eff79c4a28dd2d89d32f710015e09592b6.
* Add size limit check for HAP_mmap in htp_iface_mmap and drop_mmap functions
* wip
* ggml: add graph_reused
* use versioning instead of reuse flag
* increment version with atomic
* use top bits for split numbering
* add assert
* move counter to ggml.c
* set uid in split_graph only
* fix windows
* address further review comments
* get next_uid rather than doing bit manipulation
* rename + add comment about uid
* nix: support unified apple-sdk
* Impl roll op for Metal
* Revert "nix: support unified apple-sdk"
This reverts commit abfa473360471532c547de8b202c780507924d4b.
* update ops.md
* update op docs
* Update register tiling matmul to use f32 accumulation
* fix profiling code
* Fix register tiling matmul for Chrome, I'm blaming Dawn
* Update batch tuning value for iOS
* compile fix
* Fix use of new load function
* Move to a single query set for GPU profiling
* Move to batching compute passes when not profiling
* Refactor build_multi
* remove iOS throttling now that we're batching compute passes
* [SYCL] Fix Q8_0 reorder: add missing dequantize path for GEMM
The Q8_0 reorder optimization (#21527) was missing a reorder-aware
dequantizer for the GEMM code path used during prompt processing.
After token generation reordered Q8_0 weights (via DMMV/MMVQ), the
next prompt processing pass would read them with the standard
dequantizer, producing garbage output.
Add dequantize_block_q8_0_reorder() and wire it into both
ggml_get_to_fp16_sycl() and ggml_get_to_fp32_sycl(), matching the
pattern already used by Q4_0, Q4_K, and Q6_K.
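A contrast of the two layouts the dequantizer now has to understand, as a plain C++ sketch (names and the exact SoA packing are assumptions, following how the other reordered types are described: all quant bytes first, then all scales):

```cpp
#include <cstdint>

// Standard (AoS): blocks of {f16 scale, 32 int8 quants} stored one after another.
static float dequant_q8_0_aos(const uint8_t * data, int64_t block, int i,
                              float (*half_to_float)(uint16_t)) {
    const uint8_t * blk = data + block * (sizeof(uint16_t) + 32);
    const float     d   = half_to_float(*(const uint16_t *) blk);
    const int8_t    q   = ((const int8_t *) (blk + sizeof(uint16_t)))[i];
    return d * q;
}

// Reordered (SoA, assumed layout): every quant byte of the tensor first,
// then every scale packed at the end.
static float dequant_q8_0_soa(const uint8_t * data, int64_t nblocks, int64_t block, int i,
                              float (*half_to_float)(uint16_t)) {
    const int8_t *   qs = (const int8_t *) data;                     // all quants first
    const uint16_t * ds = (const uint16_t *) (data + nblocks * 32);  // then all scales
    return half_to_float(ds[block]) * qs[block * 32 + i];
}
```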
Fixes #21589
AI (Claude) was used to assist with root cause investigation and
writing the kernel code. All code was human-reviewed and tested
on real hardware.
* SYCL: fix reorder crash when device memory is full
The reorder optimization allocates a temporary buffer the full size of
the weight tensor on the device. When VRAM is nearly full (large models
on a single GPU), this allocation fails and the subsequent memcpy crashes
on a NULL pointer.
Fix: try device allocation first, fall back to host memory if device
memory is full. The reorder kernel still works correctly reading from
host memory over PCIe. This is slower for the one-time reorder (~21 t/s
vs ~38 t/s on Intel Arc Pro B70), but the optimization is preserved for
all subsequent inference. If both device and host allocation fail, skip
the reorder and fall back to the unoptimized kernel path.
Also fixes a bug where opt_for_reorder() marked tensors as reordered
even when the reorder was skipped due to allocation failure. This caused
DMMV/MMVQ kernels to read the original AoS data as if it were SoA,
producing garbage output or NaN results.
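A condensed sketch of the fallback order described above (hypothetical helper; the surrounding reorder code is not shown):

```cpp
#include <sycl/sycl.hpp>

// Try device USM first, fall back to host USM (still device-accessible over
// PCIe, just slower for the one-time reorder), and if both fail return
// nullptr so the caller skips the reorder and keeps the unoptimized path
// instead of marking the tensor as reordered.
static void * alloc_reorder_tmp(sycl::queue & q, size_t size, bool & host_fallback) {
    host_fallback = false;
    void * ptr = sycl::malloc_device(size, q);
    if (ptr) {
        return ptr;
    }
    ptr           = sycl::malloc_host(size, q);
    host_fallback = (ptr != nullptr);
    return ptr;   // nullptr => skip the reorder entirely
}
```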
Tested on Intel Arc Pro B70 (32GB) with Q8_0, Q4_K_M models. Coding was
AI-assisted (Claude), reviewed and tested on hardware by a human.
Fixes #20478
* SYCL: add RAII temp buffer class + macro guard for host fallback
Replace sycl_ext_malloc_with_fallback/sycl_ext_free_fallback free
functions with sycl_reorder_temp_buffer RAII class. The host_fallback
bool is now a private member, and cleanup happens automatically at
scope exit.
Add GGML_SYCL_HOST_MEM_FALLBACK cmake option (default ON) to guard
the host memory fallback code path. Device access to host memory
requires Linux kernel 6.8+ (Ubuntu 26.04+); users on older kernels
can set -DGGML_SYCL_HOST_MEM_FALLBACK=OFF to disable it.
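A minimal sketch of the RAII shape described above (the real class is sycl_reorder_temp_buffer; the members and constructor here are assumptions, and the macro mirrors the cmake option):

```cpp
#include <sycl/sycl.hpp>

class reorder_temp_buffer_sketch {
  public:
    reorder_temp_buffer_sketch(sycl::queue & q, size_t size) : q_(q) {
        ptr_ = sycl::malloc_device(size, q_);
#ifdef GGML_SYCL_HOST_MEM_FALLBACK
        if (!ptr_) {
            ptr_           = sycl::malloc_host(size, q_);
            host_fallback_ = (ptr_ != nullptr);
        }
#endif
    }
    ~reorder_temp_buffer_sketch() {
        if (ptr_) {
            sycl::free(ptr_, q_);   // same free call for device and host USM
        }
    }
    void * get() const { return ptr_; }

  private:
    sycl::queue & q_;
    void *        ptr_           = nullptr;
    bool          host_fallback_ = false;   // private, per the review request
};
```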
Addresses arthw's review on PR #21638.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
* SYCL: document GGML_SYCL_HOST_MEM_FALLBACK build option in SYCL.md
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
* SYCL: add reorder-aware DMMV dequantizers for Q4_K and Q6_K
Q4_K and Q6_K had reorder support for MMVQ and GEMM paths but not
DMMV. When the DMMV path encountered reordered data it would abort.
Add DMMV kernels that read from the SOA reorder layout for both
types. Same math as the non-reorder versions, different memory
access pattern.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
---------
Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
* CUDA: manage NCCL communicators in context
* add check that all backends are CUDA
* remove unused vector, limit init to > 1 GPUs
* fix warnings
* fix cuda device, cache allreduce
* hexagon: add async HMX worker
Introduce hmx-worker (dedicated thread for HMX compute) to overlap HMX
matmul with HVX dequant/DMA stages in the pipeline path, replacing the
previous synchronous HMX calls that blocked the main thread.
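A generic sketch of the producer/consumer shape (std::thread and std::mutex here are purely for illustration; the HTP code uses its own threading and futex primitives):

```cpp
#include <condition_variable>
#include <functional>
#include <mutex>
#include <queue>
#include <thread>

// Illustrative-only sketch of the overlap: the main thread keeps feeding HVX
// dequant/DMA work while previously submitted HMX matmul jobs run on a
// dedicated worker, instead of blocking on each HMX call.
class hmx_worker_sketch {
  public:
    hmx_worker_sketch() : thread_([this] { run(); }) {}
    ~hmx_worker_sketch() {
        { std::lock_guard<std::mutex> l(m_); stop_ = true; }
        cv_.notify_one();
        thread_.join();                        // drains remaining jobs first
    }
    void submit(std::function<void()> job) {   // called from the main thread
        { std::lock_guard<std::mutex> l(m_); jobs_.push(std::move(job)); }
        cv_.notify_one();
    }
  private:
    void run() {                               // HMX compute runs here
        for (;;) {
            std::function<void()> job;
            {
                std::unique_lock<std::mutex> l(m_);
                cv_.wait(l, [this] { return stop_ || !jobs_.empty(); });
                if (stop_ && jobs_.empty()) return;
                job = std::move(jobs_.front());
                jobs_.pop();
            }
            job();
        }
    }
    std::mutex                        m_;
    std::condition_variable           cv_;
    std::queue<std::function<void()>> jobs_;
    bool                              stop_ = false;
    std::thread                       thread_;
};
```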
* hexagon: cost-based VTCM chunk search for out-stationary matmul
* hexagon: fix futex race in hmx_worker_drain
Store the boolean in a local variable to avoid loading the atomic twice.
* hex-mm: optimize HMX scatter/transpose and use HMX intrinsics
* hex-vmem: drop vmem limit a touch under 3GB on v73
* hexagon: add fwd declaration of htp_context
* hex-hmx: replace hmx-worker with hmx-queue that mimics dma-queue interface
Simplifies the overall implementation and reduces thread wakeup round trips.
* hex-mm: add debug log to hmx work func called from hmx-queue
* Update hmx-queue.h
Co-authored-by: Max Krasnyansky <max.krasnyansky@gmail.com>
---------
Co-authored-by: Kim-Chyan Gan <kgan@qti.qualcomm.com>
Co-authored-by: Max Krasnyansky <maxk@qti.qualcomm.com>
Co-authored-by: Max Krasnyansky <max.krasnyansky@gmail.com>