sleepy/llama.cpp
Directory: llama.cpp/cmake (at commit d7a14c42a1883a34a6553cbfe30da1e1b84dfd6a)

Latest commit: Diego Devesa, 2025-05-01 21:48:08 +02:00
build : fix build info on windows (#13239)
* build : fix build info on windows
* fix cuda host compiler msg
| File | Last commit | Date |
| --- | --- | --- |
| arm64-apple-clang.cmake | Add apple arm to presets (#10134) | 2024-11-02 15:35:31 -07:00 |
| arm64-windows-llvm.cmake | ggml : prevent builds with -ffinite-math-only (#7726) | 2024-06-04 17:01:09 +10:00 |
| arm64-windows-msvc.cmake | Add support for properly optimized Windows ARM64 builds with LLVM and MSVC (#7191) | 2024-05-16 12:47:36 +10:00 |
| build-info.cmake | build : fix build info on windows (#13239) | 2025-05-01 21:48:08 +02:00 |
| common.cmake | cmake : enable building llama.cpp using system libggml (#12321) | 2025-03-17 11:05:23 +02:00 |
| git-vars.cmake | llama : reorganize source code + improve CMake (#8006) | 2024-06-26 18:33:02 +03:00 |
| llama-config.cmake.in | cmake: add hints for locating ggml on Windows using Llama find-package (#11466) | 2025-01-28 19:22:06 -04:00 |
| llama.pc.in | build : fix llama.pc (#11658) | 2025-02-06 13:08:13 +02:00 |
| x64-windows-llvm.cmake | Changes to CMakePresets.json to add ninja clang target on windows (#10668) | 2024-12-09 09:40:19 -08:00 |