* ggml : revert to -lm linking instead of find_library
`find_library(MATH_LIBRARY m)` was introduced recently, but it breaks
CUDA compilation with GGML_STATIC. I could not find any valid use case
where we would prefer `find_library` over the standard `-lm` approach.
This commit is also meant to start a discussion: if there is a valid
reason to keep `find_library(MATH_LIBRARY m)`, we should clarify what
problem it was solving and find an alternative fix that does not break
CUDA with GGML_STATIC.
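For reference, a minimal sketch of the two linking approaches under discussion (the surrounding CMake is elided; the target name here is illustrative):

```cmake
# Approach this commit reverts to: link libm by its standard name and let
# the toolchain resolve -lm as usual.
target_link_libraries(ggml PRIVATE m)

# Reverted approach: probe for the library first. Per this commit, the
# probed result is what breaks static CUDA builds under GGML_STATIC.
# find_library(MATH_LIBRARY m)
# if (MATH_LIBRARY)
#     target_link_libraries(ggml PRIVATE ${MATH_LIBRARY})
# endif()
```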
Signed-off-by: Adrien Gallouët <angt@huggingface.co>
* ggml : use MATH_LIBRARY only if defined
Signed-off-by: Adrien Gallouët <angt@huggingface.co>
* ggml : fix initial broken condition
Signed-off-by: Adrien Gallouët <angt@huggingface.co>
* ggml : always respect MATH_LIBRARY when defined
Signed-off-by: Adrien Gallouët <angt@huggingface.co>
---------
Signed-off-by: Adrien Gallouët <angt@huggingface.co>
New operators:
- GGML_OP_SET: implement via aclnnInplaceCopy on target region
- GGML_OP_CUMSUM: implement via aclnnCumsum
- GGML_OP_FILL: implement via aclnnInplaceFillScalar
- GGML_OP_DIAG: implement via aclnnInplaceCopy on diagonal strides
- GGML_OP_TRI (lower/lower_diag/upper_diag/upper): implement via
aclnnTril(-1/0) and aclnnTriu(0/1) with appropriate diagonal offsets
- GGML_OP_SOLVE_TRI: implement via aclnnTriangularSolve
- GGML_UNARY_OP_SOFTPLUS: implement via aclnnSoftplus
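The diagonal offsets used for the TRI variants can be sanity-checked against a plain-Python reference (illustrative names only, not the ACL or ggml API):

```python
def tri_mask(n, mode):
    """Keep-mask per GGML_OP_TRI variant, per the mapping in this commit:
    lower      -> tril(k=-1): strictly below the diagonal
    lower_diag -> tril(k=0):  diagonal and below
    upper_diag -> triu(k=0):  diagonal and above
    upper      -> triu(k=1):  strictly above
    """
    k = {"lower": -1, "lower_diag": 0, "upper_diag": 0, "upper": 1}[mode]
    if mode.startswith("lower"):
        return [[1 if j - i <= k else 0 for j in range(n)] for i in range(n)]
    return [[1 if j - i >= k else 0 for j in range(n)] for i in range(n)]
```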
Optimizations:
- GLU (SwiGLU/GeGLU/GeGLU_ERF/GeGLU_QUICK): fuse with aclnnSwiGlu /
aclnnGeGluV3 when applicable; fallback conditions now checked inside
each function rather than at the call site
- CROSS_ENTROPY_LOSS: replace 5-kernel sequence (LogSoftmax→Mul→
ReduceSum×2→Muls) with single aclnnSoftmaxCrossEntropyWithLogits call
- L2_NORM: fix in-place ClampMin on norm result (was clamping wrong
tensor); add eps clamping before division to avoid divide-by-zero
- PAD_REFLECT_1D: eliminate per-ne[3] loop; assert contiguity and call
ReflectionPad1d once on the full 4-D view; remove redundant nb copies
- GET_ROWS: replace IndexSelect with GatherV2 per batch slice; refactor
helper into gather_batched lambda with batch loop inlined
- SET_ROWS: replace IndexCopy with InplaceIndexCopy per batch slice;
refactor helper into scatter_batched lambda with batch loop inlined
- OUT_PROD: replace O(ne[3]*ne[2]*ne[1]) Ger+InplaceAdd loop with
per-slice Matmul loop (src0 @ src1^T); handles strided-broadcast
batch dims where ne02/ne03 may differ from ne2/ne3
- backend memset_tensor: implement via aclrtMemset (was NULL)
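For CROSS_ENTROPY_LOSS, the fused call computes the same quantity as the old kernel sequence; a plain-Python reference of that math for one row (not the ACL API):

```python
import math

def cross_entropy_with_logits(logits, labels):
    # Numerically stable log-softmax (the old path's LogSoftmax step)...
    m = max(logits)
    lse = m + math.log(sum(math.exp(x - m) for x in logits))
    log_probs = [x - lse for x in logits]
    # ...followed by the Mul + ReduceSum steps: -sum(labels * log_softmax)
    return -sum(l * lp for l, lp in zip(labels, log_probs))
```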
Bug fixes:
- COUNT_EQUAL: use non-inplace EqTensor into a same-type temporary
buffer instead of InplaceEqTensor, avoiding corruption of src0
- ACL graph cache (USE_ACL_GRAPH): restore node_type and src_type[]
fields in ggml_graph_node_properties; has_matching_properties() was
missing type checks, causing F16 and BF16 tensors (same nb[0]=2) to
incorrectly share cached graphs and produce wrong results (ERR≈679)
- graph cache op_params matching: compare full GGML_MAX_OP_PARAMS
bytes so that ops differing only in parameters are not incorrectly
replayed from cache
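The graph-cache fix amounts to widening the property comparison; a plain-Python sketch with hypothetical field names (the real struct is ggml_graph_node_properties):

```python
def has_matching_properties(a, b):
    # Shapes and strides alone are ambiguous: F16 and BF16 both have
    # nb[0] == 2, so the node's own type and every source's type must also
    # match, and op_params must be compared over the full byte range.
    return (a["node_type"] == b["node_type"]
            and a["src_type"] == b["src_type"]
            and a["ne"] == b["ne"]
            and a["nb"] == b["nb"]
            and a["op_params"] == b["op_params"])
```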
The previous code worked only for full-tensor reads and writes and was hitting the `GGML_ASSERT(size == ggml_nbytes(tensor));` assert when tested with llama-server.
* Optimize Metal Tensor API usage for matmul2d
Separates the Metal Tensor API (matmul2d) path in kernel_mul_mm into its own standalone kernel, gated by GGML_METAL_HAS_TENSOR.
The legacy simdgroup_matrix kernel is preserved under #else.
Previously both paths were interleaved via #ifdef blocks within a single kernel, forcing the tensor path to share the legacy kernel's data layout and threadgroup memory scheme. Splitting the kernel enabled memory and dispatch optimizations that weren't possible when the two paths shared code structure.
* cont : cleanup
* cont : cleanup
* cont : cleanup
---------
Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
* opt arc770 for Q4_0
* add for Q4_0
* update the script
* add help script for windows
* update guide
* fix format issue
* convert from dos to unix for format issue
* fix missed -sm parameter
* ggml-webgpu: add tile flash attention fallback
* ggml-webgpu: add new fields and discard usage of mnk for tile version
* ggml-webgpu: modify the vec path to discard the mnk parameter
* ggml-webgpu: enable flash attention vec and tile version for browser
* ggml-webgpu: staging KV for flash attention tile version
* formatting
* turn on subgroup uniformity check
* remove Q_TILE as it is always 1 for vec path
* make row_max and exp_sum to local register
* make different bindings with same underlying buffer to have the same usage flags
* move path selection into the shader library and have the host consume a single flash-attn decision object.
* turn off skip_validation and address buffer overlapping when nwg==1
* formatting
* merge binding when kv overlap
* hexagon: restore HTP_OPMASK_QUEUE
* hexagon: honor OPMASK_SKIP_COMPUTE in hmx-matmul
* hex-prof: restore op profiling
* hex-prof: enable PMU
* hexagon: simplify and improve op-queuing with full profiling support
Add separate profile descriptors.
* hexagon: remove opsync and rename opmask into opstage
opsync is no longer needed since the profiler is fully async now.
opmask name was confusing and opstage is more accurate.
* hexagon: refactor opbatch queue handling
* hexagon: add iface hooks for enabling profiler from the host
Also move all the PMU setup stuff out of the hex-utils since it's not intended for normal use.
* hexagon: make profiler mode configurable
On older devices getting PMU counters is expensive so it's now optional.
* hexagon: add support for setting profiler pmu events from env
* hexagon: simplify profiler output (no need to print buffs, etc)
* hexagon: simplify pmu counter formatting
* hexagon: add a simple profile post-proc tool
* hex-prof: add support for reading logs from stdin
* hexagon: document GGML_HEXAGON_PROFILE
* hex-prof: update default width for dims field
* hex-prof: fix linter warnings and errors
* Update ggml/src/ggml-hexagon/htp/htp-ops.h
Co-authored-by: Sigbjørn Skjæret <sigbjorn.skjaeret@scala.com>
* Update scripts/snapdragon/ggml-hexagon-profile.py
Co-authored-by: Sigbjørn Skjæret <sigbjorn.skjaeret@scala.com>
---------
Co-authored-by: Trivikram Reddy <tamarnat@qti.qualcomm.com>
Co-authored-by: Sigbjørn Skjæret <sigbjorn.skjaeret@scala.com>
Fixes #22237 — the `find_library(MATH_LIBRARY m)` result was being
discarded and the target linked against the literal 'm' string.
This prevented users from overriding the math library (e.g. for AMD AOCL)
via CMake variables. Now the discovered MATH_LIBRARY is used directly.
* sycl : fused MoE mul_mat_vec_q for TG
Create an MMVQ kernel so ggml_sycl_mul_mat_id can consolidate
n_experts_used matmuls in a single kernel launch. The kernel
also reads expert IDs directly, removing a per-call host sync.
This is similar to the CUDA backend's ggml_cuda_mul_mat_vec_q*
paths.
All types supported in the current MMVQ are supported here as well:
Q2_K, Q3_K, Q4_K, Q5_K, Q6_K, Q4_0, Q4_1, Q5_0, Q5_1, Q8_0
It will fall back to the existing per-expert path when src0 has been rewritten
by opt_for_reorder(), and for any shape the fused path doesn't handle.
test-backend-ops passes for supported type/shape combos.
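As a reference for what the fused kernel consolidates, the per-token expert matmuls of mul_mat_id in plain Python (illustrative only, not the SYCL code):

```python
def mul_mat_id(experts, x, ids):
    # For each token t, apply each selected expert's weight matrix to x[t].
    # The fused MMVQ kernel performs all of these in one launch and reads
    # the expert ids on-device, avoiding a per-call host sync.
    out = []
    for t, row_ids in enumerate(ids):
        out.append([[sum(w * v for w, v in zip(wrow, x[t])) for wrow in experts[e]]
                    for e in row_ids])
    return out
```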
Benchmark: Qwen3-Next-35B-A3B Q4_K_M on Intel Arc B70 (SYCL0),
baseline 707c0b7a6, 16k context, -fa 0.
build/bin/llama-bench -hf unsloth/Qwen3.5-35B-A3B-GGUF:Q4_K_M \
-p 1024 -n 128 -d 16384 -ngl 99 -fa 0 -ub 2048 -r 2 -dev SYCL0
Before (3 runs on 707c0b7a6):
| test | run 1 | run 2 | run 3 |
| --------------- | ----------------:| ----------------:| ----------------:|
| pp1024 @ d16384 | 533.26 ± 4.87 | 535.20 ± 2.78 | 524.27 ± 3.10 |
| tg128 @ d16384 | 33.47 ± 0.02 | 33.31 ± 0.02 | 33.17 ± 0.05 |
After (3 runs on 707c0b7a6 + this patch):
| test | run 1 | run 2 | run 3 |
| --------------- | ----------------:| ----------------:| ----------------:|
| pp1024 @ d16384 | 534.06 ± 0.97 | 531.95 ± 0.02 | 520.94 ± 20.10 |
| tg128 @ d16384 | 45.85 ± 0.21 | 45.95 ± 0.45 | 46.22 ± 0.12 |
disclosure: Claude wrote it, but I reviewed and understand the implementation
(albeit my C is a little rusty).
* sycl: also support nvfp4 and mxfp4 expert types
* sycl: terser comments/nested dispatch in response to review
* sycl: more comment cleanup in mmvq.cpp/hpp
---------
Co-authored-by: Debian <aaron@openllmi.net.bots.is>
* shader(im2col): implement the im2col shader
* shader(im2col): clean the formatting issues
* shader(im2col): clean the editorconfig checker warning
* fix(shader): address the workgroup issues of im2col and conv2d
In #11362 HIP graphs were disabled by default because, at the time, their performance impact was negative. Due to improvements in ROCm and in our usage and construction of graphs this is no longer true, so let's change the default.
* Only run webgpu CI on my fork
* Implement set_tensor_async
* Implement synchronize api
* Implement event creation and deletion API
* Cleanup
* Cleanup
* Comment out jobs for local CI run
* Add webgpu only workflow
* Delete .github/workflows/build-webgpu.yml
* Cleanup
* Cleanup
* Update API with function handlers
* Run clang-format
* Replace one-shot buffer with a direct queue.WriteBuffer using the buffer context
* fused rms_norm_mul + mul
* Add GGML_WEBGPU_DISABLE_FUSION for being able to disable kernel fusion.
* Decouple num_fused_ops from webgpu_context; misc cleanup
* Fix eps handling and remove disable_fusion.
* Fix not to use c++20 initializers.
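The fused rms_norm + mul computes, per row, the following (a plain-Python reference of the math, including the eps term whose handling the fix above addresses; not the WGSL shader):

```python
import math

def rms_norm_mul(x, w, eps=1e-6):
    # RMS-normalize x, then multiply elementwise by w, in one fused pass.
    rms = math.sqrt(sum(v * v for v in x) / len(x) + eps)
    return [v / rms * wi for v, wi in zip(x, w)]
```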
* sycl: size mul_mat_id staging buffers by routed rows
Previously src1_contiguous/dst_contiguous in ggml_sycl_mul_mat_id were
sized to ggml_nelements(src1/dst), which over-allocates when ne12 > 1
and can fail with UR_RESULT_ERROR_OUT_OF_HOST_MEMORY on Level Zero for
MoE models (notably with --cpu-moe). Size them by the actual number of
routed rows (ids->ne[1] * n_ids) instead.
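The new sizing in one line, as a back-of-envelope sketch (parameter names are illustrative):

```python
def staging_rows(n_tokens, n_experts_used):
    # Size src1_contiguous/dst_contiguous by rows actually routed:
    # ids->ne[1] tokens, each dispatched to n_ids experts, rather than
    # ggml_nelements(src1), which over-allocates whenever ne12 > 1.
    return n_tokens * n_experts_used
```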
* sycl: add bf16 mul_mat fast path via DNNL
When src0 is BF16 (commonly the case for lm_head / output.weight), the
existing f16 path is skipped because bf16 isn't covered, and the f32
fallback dequantizes the entire src0 slab to f32 in a single pool alloc
(row_diff*ne00 floats). For large-vocab models this can reach several
GB and fail with UR_RESULT_ERROR_OUT_OF_HOST_MEMORY on Level Zero.
Add a bf16xbf16 -> f32 DNNL matmul fast path that uses the bf16 storage
in place and only materializes a small src1 bf16 conversion buffer. bf16
matmul accumulates in f32, so it's correct even when the op requests
GGML_PREC_F32 (as lm_head does).
- gemm.hpp: map bfloat16 to dnnl::memory::data_type::bf16.
- convert.{hpp,cpp}: expose ggml_get_to_bf16_sycl for f32/f16/bf16 -> bf16.
- ggml-sycl.cpp: take the bf16 path early in ggml_sycl_op_mul_mat_sycl
when DNNL and GGML_SYCL_HAS_BF16 are both available.
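To see why the f32 fallback hurts, its single pool allocation is row_diff * ne00 f32 values; quick arithmetic with an illustrative large-vocab shape (the numbers are hypothetical, not from this commit):

```python
def f32_dequant_bytes(row_diff, ne00):
    # The f32 fallback dequantizes the whole src0 slab in one pool alloc:
    # row_diff rows of ne00 float32 (4-byte) values.
    return row_diff * ne00 * 4

# e.g. a hypothetical 150k-row lm_head with ne00 = 8192 needs ~4.9 GB,
# while the bf16 fast path reuses the existing bf16 storage in place.
```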
* ggml(webgpu): fix the busy-polling in waitAny under Emscripten after #20618, and remove the busy-wait webgpu log
* Merge with upstream
* Fix GET_ROWS packed integer NaN when using f16 as memory buffer in shader quants
* Update Unary wgsl EXP and EXPM1 for f16 stability
* Fix GET_ROWS IQ4_XS struct for NaN f16 canonicalization
* Fix numerical precision for unary sqrt when working with f16
* Fix NaN canonicalization for packed integers using f16
* Update err threshold for binary div ops when using f16
* backend: Keep one Dawn/WebGPU instance alive for the lifetime of the static backend
* clean: uncomment existing code logs
* clean: clean the unnecessary debug info
* Refactor and generalize dequant helpers
* Remove deprecated quant structs
* Refactor shader defines to reduce repetition
* Remove error override for F16 type
* fix: fix the accidental removal of the proper initialization of ctx
* clean: clean legacy and format code
* fix: did not modify tests ops
* shader(conv2d): add conv2d shader kernels and pass f32 and f16 tests
* shader(conv2d): fix the out of bounds memory access in the weight indexing
* shader(conv2d): clean unused variables and optimize the computation
* merge: use the new entries function
* clean: address the formatting issues
* clean: address the warning issues
* clean: clean the shader editorconfig-checker issues
* clean: clean the shader editorconfig-checker with utf-8
---------
Co-authored-by: Jeremy J. Hartmann <jeremy@mtion.tv>
* Thread safety per request only
* Fix ROPE yarn case
* Fix sticky stateful config
* Use i4/i8 directly for symmetric quant
* Use weightless caching
* Add WeightlessCacheAttribute to reduce NPU memory usage
* Gelu tanh support (#125)
* Imrope support (#126)
* fix(openvino): explicit ov::Tensor frees in ggml_backend_openvino_free
* add GPU,NPU support in OV Dockerfile
* add build-openvino.yml ci
* Fix sticky stateful config
* add concurrency to ov-gpu ci runs. Move OV CI to build-openvino.yml
* fix thread-safety of shared runtime context
* rope type abstraction for frontend translations
* fix editorconfig
---------
Co-authored-by: Mustafa Cavus <mustafa.cavus@intel.com>
Co-authored-by: Dan Hoffman <dhoff749@gmail.com>
Co-authored-by: Ravi Panchumarthy <ravi.panchumarthy@intel.com>
* Fix delayed AllReduce on Gemma-4 MoE
Skip forward past nodes that don't consume the current one, and allow a chain of MULs.
* Check for all sources before skipping nodes
* Address review comments
* Implemented optimized q1_0 dot for x86 and generic
* Removed redundant helper definition
* Removed two redundant instructions from AVX q1_0 dot
* Fixed inconsistency with fp16 conversion for generic q1_0 dot and deduplicated generic fallback
* Style cleanup around AVX q1_0 dot
* Replaced explicitly unrolled blocks with inner for loop for q1_0
* Replaced scalar ARM q1_0 impl with new generic one