sleepy/llama.cpp
llama.cpp/ggml at commit b760272f1a25fcae065d827ce2cbcaa035597b02
Latest commit: Trivikram Reddy b760272f1a hexagon: guard HMX clock request for v75+ platforms (#22377), 2026-04-25 17:58:26 -07:00
Name            Last commit                                                          Last updated
cmake           ggml: backend-agnostic tensor parallelism (experimental) (#19378)    2026-04-09 16:42:19 +02:00
include         CUDA: manage NCCL communicators in context (#21891)                  2026-04-15 15:58:40 +02:00
src             hexagon: guard HMX clock request for v75+ platforms (#22377)         2026-04-25 17:58:26 -07:00
.gitignore      vulkan : cmake integration (#8119)                                   2024-07-13 18:12:39 +02:00
CMakeLists.txt  HIP: flip GGML_HIP_GRAPHS to default on (#22254)                     2026-04-23 02:34:31 +02:00