llama.cpp/tools
JJJYmmm fc0fe40049 models : support qwen3.5 series (#19468)
* support qwen3.5 series
* remove deepstack for now, plus some code cleanup
* add FULL_ATTENTION_INTERVAL metadata
* reorder V heads for linear attention to avoid an expensive interleaved repeat
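The V-head reordering in the last bullet can be sketched in isolation. This is a toy NumPy illustration of the general idea, not the actual llama.cpp conversion or kernel code, and the shapes are made up: repeating each value head `group` times in interleaved order forces a strided, element-by-element copy, while pre-permuting the heads once lets the runtime use a cheap contiguous tile that produces the same rows.

```python
import numpy as np

# Hypothetical toy shapes (not llama.cpp's actual layout):
# 2 KV heads, 3 query heads per KV head, head dim 4.
n_kv, group, d = 2, 3, 4
v = np.arange(n_kv * d, dtype=np.float32).reshape(n_kv, d)

# Interleaved repeat: head order [0,0,0,1,1,1], built element by element.
interleaved = np.repeat(v, group, axis=0)

# Block repeat: head order [0,1,0,1,0,1] -- a contiguous tile.
tiled = np.tile(v, (group, 1))

# The two layouts differ only by a fixed permutation of head indices,
# so permuting the V heads once at model-conversion time lets inference
# use the cheap tile in place of the interleaved repeat.
perm = np.argsort(np.tile(np.arange(n_kv), group), kind="stable")
assert np.array_equal(tiled[perm], interleaved)
```

Since the permutation is fixed for a given head layout, it can be baked into the weights when the model is converted, so no reordering happens at inference time.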
2026-02-10 18:00:26 +02:00