98d2d2884e
* fix(vocab): remove stray text appended in llama_decode_text

  Remove the accidental concatenation of the full `text` string when formatting UNK_BYTE hex escapes. Only the closing "]" should be appended.

* feat(mtmd): add Yasa2 vision encoder support

  Add a Yasa2 (ConvNeXtV2-based) vision encoder for reka-edge:
  - Register PROJECTOR_TYPE_YASA2 and tensor name definitions
  - Add yasa2_block/yasa2_stage model structs
  - Implement the graph builder with ConvNeXt stages, GRN, and adaptive pooling
  - Wire into the clip.cpp switch statements and mtmd.cpp init_vision
  - Use mtmd_image_preprocessor_fixed_size for image preprocessing

* feat(chat): add reka-edge template handler (tools, thinking)

  - Add chat-reka.cpp/h implementing a PEG-based parser for the reka-edge format
  - Add the Reka-Edge.jinja chat template
  - Detect the reka-edge template in try_specialized_template()
  - Add LLAMA_EXAMPLE_MTMD to the chat-template-file arg

* feat: add reka vlm to gguf conversion script

  Converts Reka Yasa2 HF checkpoints to GGUF format:
  - Text decoder: Llama arch with a tiktoken/BPE vocab
  - Mmproj (--mmproj): ConvNeXt vision backbone + language_projection
  - Generates 2D sincos positional embeddings for the vision encoder

* test: add Reka Edge chat template and parser tests

  - test-chat-template: oracle tests comparing Jinja engine output against common_chat_templates_apply for text, tools, thinking, images, and video
  - test-chat: PEG parser tests for the Reka Edge format, round-trip tests for image/video content parts, and common-path integration tests

* scripts: add Reka Edge mixed quantization helper

  Q4_0 base quantization with a Q8_0 override for the last 8 transformer blocks (layers 24-31) via a --tensor-type regex.
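The layer-selection regex behind the mixed-quantization helper can be sketched as follows. This is an illustrative Python snippet, not the script's actual contents; it assumes the usual llama.cpp `blk.N.` tensor naming, and the `pick_type` helper is hypothetical.

```python
import re

# layers 24-31 get the Q8_0 override; everything else stays at the Q4_0 base
OVERRIDE = re.compile(r"blk\.(2[4-9]|3[01])\.")

def pick_type(tensor_name: str) -> str:
    return "Q8_0" if OVERRIDE.search(tensor_name) else "Q4_0"

print(pick_type("blk.24.ffn_up.weight"))   # -> Q8_0
print(pick_type("blk.23.ffn_up.weight"))   # -> Q4_0
```

Note that the alternation `2[4-9]|3[01]` and the trailing `\.` keep single-digit layers like `blk.3.` from matching.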
* fix: adapt chat-reka and tests to upstream API

  - Use autoparser::generation_params (not templates_params)
  - Add p.prefix(generation_prompt) to the PEG parser
  - Simplify the reasoning parser to match the LFM2 pattern
  - Remove the image/video oracle tests (unsupported by the oaicompat parser; no other multimodal model tests this path)

* fix: avoid duplicate tensor loading in yasa2 vision encoder

  TN_YASA_PATCH_W and TN_PATCH_EMBD both resolve to "v.patch_embd.weight", causing the same tensor to be loaded twice into ctx_data and overflowing the memory pool. Reuse the tensors already loaded by the common section.

* chore: update image pre-processing settings

  The reka-edge model depends on the following settings from an older fork of llama.cpp:
  1. Fixed square resize
  2. BICUBIC interpolation
  3. add_padding=false

  In current llama.cpp, this means setting:
  - image_resize_algo = RESIZE_ALGO_BICUBIC
  - image_resize_pad = false

* chore: remove reka gguf conversion script

* chore: remove reka quantization script

* chore: remove unnecessary changes from PR scope

  This commit removes a few changes that fall outside the PR scope:
  1. The BPE decoder bug fix. This affects reka edge because a bug in our tokenization keeps <think> tokens from being represented as special tokens. However, this is not meant to be a thinking model, so when run with --reasoning off the edge case does not affect us.
  2. --chat-template-file support in llama-mtmd-cli. The focus is on llama-server, and the reka edge GGUF contains the metadata needed to detect the chat template.
  3. The reka edge oracle test cases. No other model has similar test cases, so they were removed for standardization.

* chore: remove unnecessary ggml_cast

  Removes an unnecessary ggml_cast after updating the reka vlm -> gguf conversion script on Hugging Face.

* chore: remove redundant code

* chore: remove unnecessary ggml_cont calls

  Removes all ggml_cont calls except the four that precede ggml_reshape_3d/ggml_reshape_4d. Those are necessary because ggml_reshape recomputes strides assuming a contiguous layout and asserts ggml_is_contiguous. Other operations (ggml_mean, ggml_add, ggml_mul, etc.) use stride-based indexing and handle non-contiguous inputs correctly, so ggml_cont can be dropped for those.

* chore: remove unnecessary ggml_repeat calls

  The underlying ops already broadcast automatically. Every ggml_repeat in yasa2.cpp was expanding a smaller tensor to match a larger one's shape before passing both to an elementwise op (ggml_add, ggml_sub, ggml_mul, or ggml_div). This is unnecessary because all four of these ops support broadcasting internally.

* chore: restore ggml_cont needed for cpu operations

* refactor: locate reka chat template handler in chat.cpp

* chore: remove unnecessary warmup tokens

* chore: add code comments on image_resize_pad

* chore: remove custom reka parsing code

* chore: revert common/chat.cpp

* Uncomment debug logging for PEG input parsing

---------

Co-authored-by: Piotr Wilkin (ilintar) <piotr.wilkin@syndatis.com>
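The broadcasting behavior that makes the explicit ggml_repeat calls redundant can be sketched in plain Python (this is an illustration of the idea, not ggml code): an elementwise op can reuse the smaller operand's elements cyclically instead of first materializing a repeated copy.

```python
def add_broadcast(big, small):
    # each element of `small` is reused cyclically via a wrapped index,
    # so no repeated copy of `small` is ever materialized
    return [b + small[i % len(small)] for i, b in enumerate(big)]

# equivalent to repeating [10, 20] out to length 4 and then adding,
# but without the intermediate repeat
print(add_broadcast([1, 2, 3, 4], [10, 20]))  # -> [11, 22, 13, 24]
```

ggml's elementwise ops achieve the same effect with stride-based indexing over the smaller tensor, which is why they also tolerate non-contiguous inputs.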
123 lines · 3.8 KiB · CMake
# mtmd

find_package(Threads REQUIRED)

add_library(mtmd
            mtmd.cpp
            mtmd-audio.cpp
            mtmd-image.cpp
            mtmd.h
            mtmd-helper.cpp
            mtmd-helper.h
            clip.cpp
            clip.h
            clip-impl.h
            clip-model.h
            clip-graph.h
            models/models.h
            models/cogvlm.cpp
            models/conformer.cpp
            models/dotsocr.cpp
            models/gemma4a.cpp
            models/gemma4v.cpp
            models/glm4v.cpp
            models/hunyuanocr.cpp
            models/internvl.cpp
            models/kimivl.cpp
            models/kimik25.cpp
            models/nemotron-v2-vl.cpp
            models/llama4.cpp
            models/llava.cpp
            models/minicpmv.cpp
            models/paddleocr.cpp
            models/pixtral.cpp
            models/qwen2vl.cpp
            models/qwen3vl.cpp
            models/qwen3a.cpp
            models/step3vl.cpp
            models/siglip.cpp
            models/whisper-enc.cpp
            models/deepseekocr.cpp
            models/mobilenetv5.cpp
            models/youtuvl.cpp
            models/yasa2.cpp
            )

set_target_properties(mtmd PROPERTIES
    VERSION ${LLAMA_INSTALL_VERSION}
    SOVERSION 0
    MACHO_CURRENT_VERSION 0 # keep macOS linker from seeing oversized version number
)

target_link_libraries     (mtmd PUBLIC ggml llama)
target_link_libraries     (mtmd PRIVATE Threads::Threads)
target_include_directories(mtmd PUBLIC .)
target_include_directories(mtmd PRIVATE ../..)
target_include_directories(mtmd PRIVATE ../../vendor)
target_compile_features   (mtmd PRIVATE cxx_std_17)

if (BUILD_SHARED_LIBS)
    set_target_properties     (mtmd PROPERTIES POSITION_INDEPENDENT_CODE ON)
    target_compile_definitions(mtmd PRIVATE LLAMA_BUILD)
    target_compile_definitions(mtmd PUBLIC  LLAMA_SHARED)
endif()

set(MTMD_PUBLIC_HEADERS
    ${CMAKE_CURRENT_SOURCE_DIR}/mtmd.h
    ${CMAKE_CURRENT_SOURCE_DIR}/mtmd-helper.h
    )

set_target_properties(mtmd
    PROPERTIES
    PUBLIC_HEADER "${MTMD_PUBLIC_HEADERS}")

set_target_properties(mtmd
    PROPERTIES
    PRIVATE_HEADER debug/mtmd-debug.h)

install(TARGETS mtmd LIBRARY PUBLIC_HEADER)

if (NOT MSVC)
    # for stb_image.h and miniaudio.h
    target_compile_options(mtmd PRIVATE -Wno-cast-qual)
endif()

if (ANDROID)
    # miniaudio.h defines ma_android_sdk_version() without a prior prototype
    target_compile_options(mtmd PRIVATE -Wno-missing-prototypes)
endif()

if (TARGET BUILD_INFO)
    add_dependencies(mtmd BUILD_INFO)
    add_dependencies(mtmd-helper BUILD_INFO)
endif()

# if mtmd is linked against llama-common, we throw an error
if (TARGET mtmd)
    get_target_property(libs mtmd LINK_LIBRARIES)
    if (libs AND "llama-common" IN_LIST libs)
        message(FATAL_ERROR "mtmd is designed to be a public library.\n"
                            "It must not link against llama-common")
    endif()
endif()

add_executable(llama-llava-cli    deprecation-warning.cpp)
add_executable(llama-gemma3-cli   deprecation-warning.cpp)
add_executable(llama-minicpmv-cli deprecation-warning.cpp)
add_executable(llama-qwen2vl-cli  deprecation-warning.cpp)

set(TARGET llama-mtmd-cli)
add_executable        (${TARGET} mtmd-cli.cpp)
set_target_properties (${TARGET} PROPERTIES OUTPUT_NAME llama-mtmd-cli)
if (LLAMA_TOOLS_INSTALL)
    install(TARGETS ${TARGET} RUNTIME)
endif()
target_link_libraries  (${TARGET} PRIVATE llama-common mtmd Threads::Threads)
target_compile_features(${TARGET} PRIVATE cxx_std_17)

# mtmd-debug tool
add_executable         (llama-mtmd-debug debug/mtmd-debug.cpp)
set_target_properties  (llama-mtmd-debug PROPERTIES OUTPUT_NAME llama-mtmd-debug)
target_link_libraries  (llama-mtmd-debug PRIVATE llama-common mtmd Threads::Threads)
target_compile_features(llama-mtmd-debug PRIVATE cxx_std_17)