llama.cpp / ggml
Files at commit 0dedb9ef7a71fcebfa6fb17e0d6e6abd6e893376
Latest commit 0dedb9ef7a by Aparna M P: hexagon: add support for FILL op (#22198)
Co-authored-by: Max Krasnyansky <maxk@qti.qualcomm.com>
Date: 2026-04-21 16:24:20 -07:00
Name            Last commit                                                          Date
cmake           ggml: backend-agnostic tensor parallelism (experimental) (#19378)   2026-04-09 16:42:19 +02:00
include         CUDA: manage NCCL communicators in context (#21891)                 2026-04-15 15:58:40 +02:00
src             hexagon: add support for FILL op (#22198)                           2026-04-21 16:24:20 -07:00
.gitignore      vulkan : cmake integration (#8119)                                  2024-07-13 18:12:39 +02:00
CMakeLists.txt  ggml : bump version to 0.10.0 (ggml/1463)                           2026-04-21 11:04:21 +03:00