sleepy/llama.cpp
llama.cpp/ggml at commit 3595ae5963f1583f53beecf9725c919d309e15da
Latest commit: Johannes Gäßler e70e640db3, CUDA: Blackwell features for non-native builds (#18436), 2025-12-29 09:35:42 +01:00
Name            Last commit                                                                                                      Date
..
cmake           ggml: Skip backend library linking code when GGML_BACKEND_DL=ON (#15094)                                        2025-08-07 13:45:41 +02:00
include         llama: automatically set parameters not set by the user in such a way that maximizes GPU utilization (#16653)   2025-12-15 09:24:59 +01:00
src             CUDA: Blackwell features for non-native builds (#18436)                                                         2025-12-29 09:35:42 +01:00
.gitignore      vulkan : cmake integration (#8119)                                                                               2024-07-13 18:12:39 +02:00
CMakeLists.txt  cmake: Added more x86_64 CPU backends when building with GGML_CPU_ALL_VARIANTS=On (#18186)                      2025-12-28 09:33:29 +02:00