llama.cpp / .devops (at commit 89daa2564f6eab33be53c6a1b39273af536d6bb3)

Latest commit: dbc2ec59b5 "docker : drop to CUDA 12.4 (#11869)" by Georgi Gerganov, 2025-02-14 14:48:40 +02:00

* docker : drop to CUDA 12.4
* docker : update readme [no ci]
| Name | Last commit | Date |
| --- | --- | --- |
| nix | nix: allow to override rocm gpu targets (#10794) | 2024-12-14 10:17:36 -08:00 |
| cloud-v-pipeline | build : rename main → llama-cli, server → llama-server, llava-cli → llama-llava-cli, etc... (#7809) | 2024-06-13 00:41:52 +01:00 |
| cpu.Dockerfile | docker : add GGML_CPU_ARM_ARCH arg to select ARM architecture to build for (#11419) | 2025-01-25 17:22:41 +01:00 |
| cuda.Dockerfile | docker : drop to CUDA 12.4 (#11869) | 2025-02-14 14:48:40 +02:00 |
| intel.Dockerfile | devops : add docker-multi-stage builds (#10832) | 2024-12-22 23:22:58 +01:00 |
| llama-cli-cann.Dockerfile | docker: use GGML_NATIVE=OFF (#10368) | 2024-11-18 00:21:53 +01:00 |
| llama-cpp-cuda.srpm.spec | devops : remove clblast + LLAMA_CUDA -> GGML_CUDA (#8139) | 2024-06-26 19:32:07 +03:00 |
| llama-cpp.srpm.spec | build : rename main → llama-cli, server → llama-server, llava-cli → llama-llava-cli, etc... (#7809) | 2024-06-13 00:41:52 +01:00 |
| musa.Dockerfile | musa: bump MUSA SDK version to rc3.1.1 (#11822) | 2025-02-13 13:28:18 +01:00 |
| rocm.Dockerfile | devops : add docker-multi-stage builds (#10832) | 2024-12-22 23:22:58 +01:00 |
| tools.sh | docker: add perplexity and bench commands to full image (#11438) | 2025-01-28 10:42:32 +00:00 |
| vulkan.Dockerfile | ci : fix build CPU arm64 (#11472) | 2025-01-29 00:02:56 +01:00 |