llama.cpp/ggml at commit 0c6ee1cadee96d3ba6e06afb0c001d33f413a0b0

Latest commit: Sigbjørn Skjæret 0c6ee1cade ggml-cpu : re-enable fast gelu_quick_f16 (#22339), 2026-04-26 09:28:14 +03:00
Name            Last commit                                                          Date
cmake           ggml: backend-agnostic tensor parallelism (experimental) (#19378)   2026-04-09 16:42:19 +02:00
include         CUDA: manage NCCL communicators in context (#21891)                  2026-04-15 15:58:40 +02:00
src             ggml-cpu : re-enable fast gelu_quick_f16 (#22339)                    2026-04-26 09:28:14 +03:00
.gitignore      vulkan : cmake integration (#8119)                                   2024-07-13 18:12:39 +02:00
CMakeLists.txt  HIP: flip GGML_HIP_GRAPHS to default on (#22254)                     2026-04-23 02:34:31 +02:00