sleepy/llama.cpp
Path: llama.cpp/ggml @ f40a80b4f3cd00c4c405c45b7f316f7e77352323
Latest commit: Neo Zhang, f40a80b4f3 "support bf16 and quantized type (#20803)", 2026-03-22 22:06:27 +08:00
Name            Last commit                                                                  Date
cmake           ggml: Skip backend library linking code when GGML_BACKEND_DL=ON (#15094)     2025-08-07 13:45:41 +02:00
include         ggml : restore ggml_type_sizef() to aboid major version bump (ggml/1441)     2026-03-18 15:17:28 +02:00
src             support bf16 and quantized type (#20803)                                     2026-03-22 22:06:27 +08:00
.gitignore      vulkan : cmake integration (#8119)                                           2024-07-13 18:12:39 +02:00
CMakeLists.txt  ggml : bump version to 0.9.8 (ggml/1442)                                     2026-03-18 15:17:28 +02:00