sleepy/llama.cpp
Commit: 0eb4e12beebabae46d37b78742f4c5d4dbe52dc1
Path: llama.cpp/ggml/src/ggml-cpu
Latest commit: 5931c1f233 ggml : add support for dynamic loading of backends (#10469)
Author: Diego Devesa
Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
Date: 2024-11-25 15:13:39 +01:00
Name                Last commit                                                               Date
cmake               ggml : build backends as libraries (#10256)                               2024-11-14 18:04:35 +01:00
llamafile           llamafile : fix include path (#0)                                         2024-11-16 20:36:26 +02:00
CMakeLists.txt      ggml : add support for dynamic loading of backends (#10469)               2024-11-25 15:13:39 +01:00
ggml-cpu-aarch64.c  ggml : optimize Q4_0 into Q4_0_X_Y repack (#10324)                        2024-11-16 01:53:37 +01:00
ggml-cpu-aarch64.h  backend cpu: add online flow for aarch64 Q4_0 GEMV/GEMM kernels (#9921)   2024-11-15 01:28:50 +01:00
ggml-cpu-impl.h     ggml : build backends as libraries (#10256)                               2024-11-14 18:04:35 +01:00
ggml-cpu-quants.c   AVX BF16 and single scale quant optimizations (#10212)                    2024-11-15 12:47:58 +01:00
ggml-cpu-quants.h   ggml : build backends as libraries (#10256)                               2024-11-14 18:04:35 +01:00
ggml-cpu.c          ggml : add support for dynamic loading of backends (#10469)               2024-11-25 15:13:39 +01:00
ggml-cpu.cpp        ggml : add support for dynamic loading of backends (#10469)               2024-11-25 15:13:39 +01:00