sleepy/llama.cpp
Path: llama.cpp/docs/ops
Commit: f5e7734ff2e1d2e22015f4a9da9a52c70240a064
Latest commit: 537eadb1b9, "sycl: add F16 support for GGML_OP_CEIL (#19306)" by Nechama Krashinski, 2026-02-06 23:13:44 +08:00

* Fix SYCL CEIL operator
* sycl: implement GGML_OP_CEIL
| File | Last commit | Date |
|------|-------------|------|
| BLAS.csv | docs(ggml): update backend ops (#18734) | 2026-01-10 18:48:17 +08:00 |
| CANN.csv | docs : update ops.md for CANN backend (#18654) | 2026-01-16 13:32:17 +01:00 |
| CPU.csv | docs : update cpu and cuda ops (#17890) | 2025-12-09 23:31:29 +01:00 |
| CUDA.csv | docs : update cpu and cuda ops (#17890) | 2025-12-09 23:31:29 +01:00 |
| Metal.csv | metal : add count_equal op (#18314) | 2025-12-31 10:39:48 +02:00 |
| OpenCL.csv | docs : update opencl ops (#17904) | 2025-12-10 15:20:00 +01:00 |
| SYCL.csv | sycl: add F16 support for GGML_OP_CEIL (#19306) | 2026-02-06 23:13:44 +08:00 |
| Vulkan.csv | ops.md: update vulkan support (#17661) | 2025-12-01 15:26:21 -06:00 |
| WebGPU.csv | ggml webgpu: support for backend sampling (#18880) | 2026-01-16 16:12:43 -08:00 |
| zDNN.csv | docs(ggml): update backend ops (#18734) | 2026-01-10 18:48:17 +08:00 |
| ZenDNN.csv | ggml-zendnn : add ZenDNN backend for AMD CPUs (#17690) | 2025-12-07 00:13:33 +08:00 |