sleepy/llama.cpp
llama.cpp / ggml
Commit: f40a80b4f3cd00c4c405c45b7f316f7e77352323
Latest commit f40a80b4f3 by Neo Zhang: support bf16 and quantized type (#20803), 2026-03-22 22:06:27 +08:00
Name            Last commit                                                               Date
cmake           ggml: Skip backend library linking code when GGML_BACKEND_DL=ON (#15094)  2025-08-07 13:45:41 +02:00
include         ggml : restore ggml_type_sizef() to avoid major version bump (ggml/1441)  2026-03-18 15:17:28 +02:00
src             support bf16 and quantized type (#20803)                                  2026-03-22 22:06:27 +08:00
.gitignore      vulkan : cmake integration (#8119)                                        2024-07-13 18:12:39 +02:00
CMakeLists.txt  ggml : bump version to 0.9.8 (ggml/1442)                                  2026-03-18 15:17:28 +02:00