sleepy/llama.cpp
llama.cpp/gguf-py/gguf at commit d13d60af1d7fb9293a82af9e545a1e21133555ce
Latest commit: Sigbjørn Skjæret, d13d60af1d "gguf-py : cleaner way to get the first key (#20727)", 2026-03-18 23:21:42 +01:00
Name              | Last commit                                                                             | Date
scripts           | ggml : add NVFP4 quantization type support (#19769)                                     | 2026-03-11 21:02:54 +01:00
__init__.py       | convert-*.py: GGUF Naming Convention Refactor and Metadata Override Refactor (#7499)    | 2024-07-18 20:40:15 +10:00
constants.py      | model: mistral small 4 support (#20649)                                                 | 2026-03-17 00:31:14 +01:00
gguf_reader.py    | ggml/gguf : prevent integer overflows (#19856)                                          | 2026-02-24 20:17:11 +02:00
gguf_writer.py    | gguf-py : cleaner way to get the first key (#20727)                                     | 2026-03-18 23:21:42 +01:00
gguf.py           | gguf-py: Refactor and allow reading/modifying existing GGUF files (#3981)               | 2023-11-11 08:04:50 +03:00
lazy.py           | convert : handle compressed-tensors quant method (#17069)                               | 2025-11-09 09:45:50 -05:00
metadata.py       | chore : correct typos [no ci] (#20041)                                                  | 2026-03-05 08:50:21 +01:00
py.typed          | convert : various script cleanups/fixes + merges and special token handling (#2842)     | 2023-08-30 11:25:50 +03:00
quants.py         | ggml : add NVFP4 quantization type support (#19769)                                     | 2026-03-11 21:02:54 +01:00
tensor_mapping.py | llama : add support for Nemotron 3 Super (#20411)                                       | 2026-03-11 19:27:53 +01:00
utility.py        | gguf-py : do not align the data start offset (#18291)                                   | 2025-12-22 20:25:16 +01:00
vocab.py          | convert : support latest mistral-common (fix conversion with --mistral-format) (#17712) | 2025-12-03 21:15:04 +01:00