sleepy/llama.cpp
20a758155bc5f37290b20ea44d76ba99c4e7f2cb
llama.cpp/.devops
Latest commit: 20a758155b docker : fix CPU ARM build (#11403), Diego Devesa, 2025-01-25 15:22:29 +01:00

    * docker : fix CPU ARM build
    * add CURL to other builds
Name                        Last change                  Last commit message
nix                         2024-12-14 10:17:36 -08:00   nix: allow to override rocm gpu targets (#10794)
cloud-v-pipeline            2024-06-13 00:41:52 +01:00   build: rename main → llama-cli, server → llama-server, llava-cli → llama-llava-cli, etc... (#7809)
cpu.Dockerfile              2025-01-25 15:22:29 +01:00   docker : fix CPU ARM build (#11403)
cuda.Dockerfile             2024-12-22 23:22:58 +01:00   devops : add docker-multi-stage builds (#10832)
intel.Dockerfile            2024-12-22 23:22:58 +01:00   devops : add docker-multi-stage builds (#10832)
llama-cli-cann.Dockerfile   2024-11-18 00:21:53 +01:00   docker: use GGML_NATIVE=OFF (#10368)
llama-cpp-cuda.srpm.spec    2024-06-26 19:32:07 +03:00   devops : remove clblast + LLAMA_CUDA -> GGML_CUDA (#8139)
llama-cpp.srpm.spec         2024-06-13 00:41:52 +01:00   build: rename main → llama-cli, server → llama-server, llava-cli → llama-llava-cli, etc... (#7809)
musa.Dockerfile             2024-12-22 23:22:58 +01:00   devops : add docker-multi-stage builds (#10832)
rocm.Dockerfile             2024-12-22 23:22:58 +01:00   devops : add docker-multi-stage builds (#10832)
tools.sh                    2024-12-13 18:23:50 +01:00   fix: graceful shutdown for Docker images (#10815)
vulkan.Dockerfile           2024-12-22 23:22:58 +01:00   devops : add docker-multi-stage builds (#10832)