mtmd, llama : Update HunyuanVL vision-language model support (#22037)

* mtmd, llama : add HunyuanVL vision-language model support

- add LLM_ARCH_HUNYUAN_VL with M-RoPE (XD-RoPE) support
- add PROJECTOR_TYPE_HUNYUANVL with PatchMerger vision encoder
- add HunyuanVL-specific M-RoPE position encoding for image tokens (see the sketch after this list)
- add GGUF conversion for HunyuanVL vision and text models
- add smoke test in tools/mtmd/tests.sh
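
For context on the M-RoPE bullet above: M-RoPE (multi-dimensional RoPE, XD-RoPE in HunyuanVL) gives each image token separate position ids along the temporal, height, and width axes instead of a single linear index. Below is a minimal sketch of how such per-axis positions could be filled for one ph x pw image grid; the function name, the 4-section layout, and `st_pos` are illustrative assumptions, not the actual llama.cpp helpers:

```cpp
#include <cstdint>
#include <vector>

// Hypothetical sketch (not the llama.cpp API): fill M-RoPE position ids
// for one image of ph x pw patches starting at text position st_pos.
// pos is assumed to hold 4 contiguous sections of n_tokens entries each:
// [temporal | height | width | unused], with n_tokens == ph * pw.
static void fill_mrope_positions(std::vector<int32_t> & pos,
                                 int n_tokens, int ph, int pw, int st_pos) {
    for (int y = 0; y < ph; y++) {
        for (int x = 0; x < pw; x++) {
            const int i = y * pw + x;
            pos[0 * n_tokens + i] = st_pos;     // temporal: constant for a still image
            pos[1 * n_tokens + i] = st_pos + y; // height axis (h section before w)
            pos[2 * n_tokens + i] = st_pos + x; // width axis
            pos[3 * n_tokens + i] = 0;          // unused fourth section
        }
    }
}
```

The h-before-w section order is exactly what the XD-RoPE fix below corrects.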

* fix: correct HunyuanVL XD-RoPE h/w section order

* fix: remove redundant code

* convert : fix HunyuanOCR / HunyuanVL conversion
 - Tested locally: both HunyuanOCR and HunyuanVL-4B convert to GGUF successfully and produce correct inference output on Metal (F16 / Q8_0).

* clip : fix -Werror=misleading-indentation in bilinear resize

* fix CI: convert_hf_to_gguf type check error
 - convert_hf_to_gguf.py: give HunyuanVLTextModel.__init__ an explicit `dir_model: Path` parameter so ty can infer the type for load_hparams instead of reporting `Unknown | None`.

---------

Co-authored-by: wendadawen <wendadawen@tencent.com>
manayang authored on 2026-04-22 17:58:43 +08:00; committed by GitHub
parent 750579ff14
commit 7bfe60fdf9
13 changed files with 336 additions and 27 deletions
@@ -5,7 +5,21 @@ ggml_cgraph * clip_graph_hunyuanocr::build() {
     const int pw = n_patches_x;
     const int ph = n_patches_y;
-    ggml_tensor * pos_embd = resize_position_embeddings(GGML_SCALE_MODE_BILINEAR);
+    // Position embedding interpolation.
+    // HunyuanVL needs scale factors sf=(target+0.1)/n_grid, which the standard
+    // ggml_interpolate cannot express. To avoid adding a new ggml op, the
+    // resize is computed on CPU in clip_image_batch_encode and uploaded here
+    // as a graph input (named "hunyuanvl_pos_embd").
+    // HunyuanOCR uses the same square layout and the standard ratio-based
+    // interpolation provided by resize_position_embeddings().
+    ggml_tensor * pos_embd = nullptr;
+    if (proj_type == PROJECTOR_TYPE_HUNYUANVL && model.position_embeddings) {
+        pos_embd = ggml_new_tensor_2d(ctx0, GGML_TYPE_F32, n_embd, ph * pw);
+        ggml_set_name(pos_embd, "hunyuanvl_pos_embd");
+        ggml_set_input(pos_embd);
+    } else {
+        pos_embd = resize_position_embeddings(GGML_SCALE_MODE_BILINEAR);
+    }
     ggml_tensor * inp = build_inp();
     ggml_tensor * cur = build_vit(inp, n_patches, NORM_TYPE_NORMAL, hparams.ffn_op, pos_embd, nullptr);
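
For reference, here is a minimal CPU-side sketch of the resize that the comment in the hunk above describes: bilinear interpolation of a square n_grid x n_grid position-embedding table to ph x pw using HunyuanVL's sf = (target + 0.1) / n_grid scale factor, the one ggml_interpolate cannot express. The function name, the row-major [grid_y][grid_x][n_embd] layout, and the half-pixel sampling convention are assumptions for illustration; the actual implementation lives in clip_image_batch_encode:

```cpp
#include <algorithm>
#include <vector>

// Hypothetical sketch: bilinearly resize src [n_grid x n_grid x n_embd]
// to [ph x pw x n_embd] with HunyuanVL's sf = (target + 0.1) / n_grid.
// The result would be uploaded as the "hunyuanvl_pos_embd" graph input.
static std::vector<float> resize_pos_embd_hunyuanvl(
        const std::vector<float> & src, int n_grid, int n_embd, int ph, int pw) {
    std::vector<float> dst((size_t) ph * pw * n_embd);
    const float sf_h = (ph + 0.1f) / n_grid; // scale factors with the +0.1 offset
    const float sf_w = (pw + 0.1f) / n_grid;
    for (int y = 0; y < ph; y++) {
        // map the output row back to source coords (half-pixel centers assumed)
        const float sy = std::clamp((y + 0.5f) / sf_h - 0.5f, 0.0f, (float) n_grid - 1);
        const int   y0 = (int) sy;
        const int   y1 = std::min(y0 + 1, n_grid - 1);
        const float fy = sy - y0;
        for (int x = 0; x < pw; x++) {
            const float sx = std::clamp((x + 0.5f) / sf_w - 0.5f, 0.0f, (float) n_grid - 1);
            const int   x0 = (int) sx;
            const int   x1 = std::min(x0 + 1, n_grid - 1);
            const float fx = sx - x0;
            for (int e = 0; e < n_embd; e++) {
                auto at = [&](int yy, int xx) {
                    return src[((size_t) yy * n_grid + xx) * n_embd + e];
                };
                const float top = at(y0, x0) * (1.0f - fx) + at(y0, x1) * fx;
                const float bot = at(y1, x0) * (1.0f - fx) + at(y1, x1) * fx;
                dst[((size_t) y * pw + x) * n_embd + e] = top * (1.0f - fy) + bot * fy;
            }
        }
    }
    return dst;
}
```

The tensor created in the graph above ("hunyuanvl_pos_embd", shape n_embd x ph*pw) matches this layout, so a host buffer produced this way could be copied in directly at encode time.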