sleepy/llama.cpp
Files in llama.cpp/ggml at commit 19821178be599edaf6a30d12efeaf835e7162995

Latest commit: 19821178be "vulkan: add barrier after writetimestamp (#21865)" by Jeff Bolz, 2026-04-28 12:28:12 +02:00
Name            Last commit                                                          Last updated
cmake           ggml: backend-agnostic tensor parallelism (experimental) (#19378)    2026-04-09 16:42:19 +02:00
include         CUDA: manage NCCL communicators in context (#21891)                  2026-04-15 15:58:40 +02:00
src             vulkan: add barrier after writetimestamp (#21865)                    2026-04-28 12:28:12 +02:00
.gitignore      vulkan : cmake integration (#8119)                                    2024-07-13 18:12:39 +02:00
CMakeLists.txt  HIP: flip GGML_HIP_GRAPHS to default on (#22254)                      2026-04-23 02:34:31 +02:00