sleepy / llama.cpp
llama.cpp / .devops (at commit 6f53d8a6b41e48c73b345fc6c712c3b00ea4fb93)
Latest commit 6f53d8a6b4 by Nuno: docker: add missing vulkan library to base layer and update to 24.04 (#11422) ... Signed-off-by: rare-magma <rare-magma@posteo.eu> (2025-01-26 18:22:43 +01:00)
| Name | Last commit | Date |
| --- | --- | --- |
| nix | nix: allow to override rocm gpu targets (#10794) | 2024-12-14 10:17:36 -08:00 |
| cloud-v-pipeline | build: rename main → llama-cli, server → llama-server, llava-cli → llama-llava-cli, etc... (#7809) | 2024-06-13 00:41:52 +01:00 |
| cpu.Dockerfile | docker: add GGML_CPU_ARM_ARCH arg to select ARM architecture to build for (#11419) | 2025-01-25 17:22:41 +01:00 |
| cuda.Dockerfile | devops: add docker-multi-stage builds (#10832) | 2024-12-22 23:22:58 +01:00 |
| intel.Dockerfile | devops: add docker-multi-stage builds (#10832) | 2024-12-22 23:22:58 +01:00 |
| llama-cli-cann.Dockerfile | docker: use GGML_NATIVE=OFF (#10368) | 2024-11-18 00:21:53 +01:00 |
| llama-cpp-cuda.srpm.spec | devops: remove clblast + LLAMA_CUDA -> GGML_CUDA (#8139) | 2024-06-26 19:32:07 +03:00 |
| llama-cpp.srpm.spec | build: rename main → llama-cli, server → llama-server, llava-cli → llama-llava-cli, etc... (#7809) | 2024-06-13 00:41:52 +01:00 |
| musa.Dockerfile | devops: add docker-multi-stage builds (#10832) | 2024-12-22 23:22:58 +01:00 |
| rocm.Dockerfile | devops: add docker-multi-stage builds (#10832) | 2024-12-22 23:22:58 +01:00 |
| tools.sh | fix: graceful shutdown for Docker images (#10815) | 2024-12-13 18:23:50 +01:00 |
| vulkan.Dockerfile | docker: add missing vulkan library to base layer and update to 24.04 (#11422) | 2025-01-26 18:22:43 +01:00 |
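Several of the commit messages above (#10832) refer to multi-stage Docker builds. As a hedged sketch of how these Dockerfiles are typically consumed, the commands below build an image from the repository root with `docker build`; the `--target` names (`full`, `light`, `server`) are an assumption based on the multi-stage layout and may differ in a given revision.

```shell
# Sketch only: build the CUDA variant from the repo root.
# The stage names passed to --target are assumptions, not verified here.
docker build -f .devops/cuda.Dockerfile --target full  -t local/llama.cpp:full-cuda .
docker build -f .devops/cuda.Dockerfile --target light -t local/llama.cpp:light-cuda .

# The CPU-only Dockerfile exposes a build arg for the ARM architecture
# (see commit for #11419); the value shown is illustrative.
docker build -f .devops/cpu.Dockerfile --build-arg GGML_CPU_ARM_ARCH=armv8-a \
  -t local/llama.cpp:cpu .
```

Selecting a stage with `--target` is what makes the multi-stage layout useful: the heavy toolchain lives in an earlier build stage, and only the chosen runtime stage ends up in the final image.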