sleepy/llama.cpp
llama.cpp/examples/model-conversion/scripts/causal @ e85e9d7637268906d3fc75ec65bd2ef6ebea3a54

Latest commit: 8faa87db02 by Piotr Wilkin (ilintar): Extend run-org-model.py, add (a) batching (b) loading prompt from file (c) multimodal capacity (#18034), 2025-12-17 14:21:51 +01:00
compare-embeddings-logits.sh             | model-conversion : remove hardcoded /bin/bash shebangs [no ci] (#15765) | 2025-09-03 12:50:47 +02:00
compare-logits.py                        | model-conversion : use CONVERTED_MODEL value for converted model [no ci] (#17984) | 2025-12-13 08:34:26 +01:00
convert-model.sh                         | model-conversion : remove hardcoded /bin/bash shebangs [no ci] (#15765) | 2025-09-03 12:50:47 +02:00
modelcard.template                       | model-conversion : remove -fa option in model card template [no ci] (#18088) | 2025-12-16 13:25:09 +01:00
run-casual-gen-embeddings-org.py         | model-conversion : fix pyright errors (#15770) | 2025-09-03 18:28:36 +02:00
run-converted-model-embeddings-logits.sh | model-conversion : remove hardcoded /bin/bash shebangs [no ci] (#15765) | 2025-09-03 12:50:47 +02:00
run-converted-model.sh                   | model : Qwen3 Next (#16095) | 2025-11-28 12:02:56 +01:00
run-org-model.py                         | Extend run-org-model.py, add (a) batching (b) loading prompt from file (c) multimodal capacity (#18034) | 2025-12-17 14:21:51 +01:00