llama.cpp / docs / backend
(at commit 33a56f90a6a793a3c7b1f6ca39ff43a1cecd0b61)

Latest commit: 313493de53 by TriDefender, 2026-02-12 08:13:51 +01:00
    docs : update path in snapdragon README.md (#19533)
    paths changed so original example didn't work
Name            Last commit                                                      Date
snapdragon/     docs : update path in snapdragon README.md (#19533)              2026-02-12 08:13:51 +01:00
VirtGPU/        ggml-virtgpu: add backend documentation (#19354)                 2026-02-09 20:15:42 +08:00
BLIS.md         make : deprecate (#10514)                                        2024-12-02 21:22:53 +02:00
CANN.md         CANN: add operator fusion support for ADD + RMS_NORM (#17512)    2026-01-05 15:38:18 +08:00
CUDA-FEDORA.md  docs: update: improve the Fedoa CUDA guide (#12536)              2025-03-24 11:02:26 +00:00
OPENCL.md       docs: add linux to index (#18907)                                2026-01-18 18:03:35 +08:00
SYCL.md         Remove support for Nvidia & AMD GPU, because the oneAPI plugin for Nvidia & AMD GPU is unavailable: download/installation channels are out of work. (#19246)    2026-02-02 21:06:21 +08:00
VirtGPU.md      ggml-virtgpu: add backend documentation (#19354)                 2026-02-09 20:15:42 +08:00
zDNN.md         ggml-zendnn : add ZenDNN backend for AMD CPUs (#17690)           2025-12-07 00:13:33 +08:00
ZenDNN.md       ggml-zendnn : add ZenDNN backend for AMD CPUs (#17690)           2025-12-07 00:13:33 +08:00