sleepy/llama.cpp
e68aa10d8f3d26fdad5b912540362d79de5460e3
llama.cpp/docs
Latest commit: Aaron Teo 186415d595 ggml-cpu: drop support for nnpa intrinsics (#15821), 2025-09-06 11:27:28 +08:00
backend              CANN: Fix precision issue on 310I DUO multi-devices (#15784)                2025-09-04 15:12:30 +08:00
development          docs : update HOWTO-add-model.md for ModelBase and new model classes (#14874)  2025-07-25 16:25:05 +02:00
multimodal           model : support MiniCPM-V 4.5 (#15575)                                       2025-08-26 10:05:55 +02:00
ops                  ggml: initial IBM zDNN backend (#14975)                                      2025-08-15 21:11:22 +08:00
android.md           repo : update links to new url (#11886)                                      2025-02-15 16:40:57 +02:00
build-s390x.md       ggml-cpu: drop support for nnpa intrinsics (#15821)                          2025-09-06 11:27:28 +08:00
build.md             Update build.md to remove MSVC arm64 notes (#15684)                          2025-08-30 23:51:28 +08:00
docker.md            musa: upgrade musa sdk to rc4.2.0 (#14498)                                   2025-07-24 20:05:37 +01:00
function-calling.md  server : add documentation for parallel_tool_calls param (#15647)            2025-08-29 20:25:40 +03:00
install.md           docs : add "Quick start" section for new users (#13862)                      2025-06-03 13:09:36 +02:00
llguidance.md        llguidance build fixes for Windows (#11664)                                  2025-02-14 12:46:08 -08:00
multimodal.md        mtmd : add support for Voxtral (#14862)                                      2025-07-28 15:01:48 +02:00
ops.md               ggml: initial IBM zDNN backend (#14975)                                      2025-08-15 21:11:22 +08:00
Powered by Gitea Version: 1.26.1