llama.cpp / ggml at commit 3409ab842d988c3e0dc0d3110dd793a5e187d37e
Latest commit: 3409ab842d by Jeff Bolz, 2026-02-05 08:48:33 +01:00
vulkan: Set k_load_shmem to false when K is too large (#19301)
Name            Date                        Last commit
cmake           2025-08-07 13:45:41 +02:00  ggml: Skip backend library linking code when GGML_BACKEND_DL=ON (#15094)
include         2026-02-04 10:46:18 +08:00  ggml-virtgpu: make the code thread safe (#19204)
src             2026-02-05 08:48:33 +01:00  vulkan: Set k_load_shmem to false when K is too large (#19301)
.gitignore      2024-07-13 18:12:39 +02:00  vulkan : cmake integration (#8119)
CMakeLists.txt  2026-02-01 14:13:38 -08:00  Bump cmake max version (needed for Windows on Snapdragon builds) (#19188)