llama.cpp/ggml/include
Commit 365a3e8c31 by Georgi Gerganov (2026-01-19 20:03:19 +02:00)
ggml : add ggml_build_forward_select (#18550)

* ggml : add ggml_build_forward_select
* cuda : adapt CUDA graph compat to new feature
* vulkan : update logic to handle command buffer closing
* ggml : check compute for fusion
* ggml : add comment