server: (webui) add --webui-config (#18028)

* server/webui: add server-side WebUI config support

Add CLI arguments --webui-config (inline JSON) and --webui-config-file
(file path) to configure the WebUI's default settings from the server side.
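
Both flags accept the same JSON payload. A minimal sketch of the two
invocation shapes (the model path and the "theme" key are hypothetical,
shown only to illustrate the flag syntax; the server invocations are
commented out because they need a real model file):

```shell
# Hypothetical settings payload -- the real keys are the WebUI settings
# synced by the frontend, not enumerated in this commit message.
cfg='{"theme":"dark"}'
printf '%s' "$cfg" > webui.json

# Inline JSON form:
# llama-server -m model.gguf --webui-config "$cfg"

# Same config loaded from a file:
# llama-server -m model.gguf --webui-config-file webui.json

cat webui.json
```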

Backend changes:
- Parse JSON once in server_context::load_model() for performance
- Cache parsed config in webui_settings member (no per-request parsing on /props)
- Add proper error handling in router mode with try/catch
- Expose webui_settings in /props endpoint for both router and child modes

Frontend changes:
- Add 14 configurable WebUI settings via parameter sync
- Add tests for webui settings extraction
- Fix subpath support by honoring the base path in API calls

Addresses feedback from @ngxson and @ggerganov

* server: address review feedback from ngxson

* server: regenerate README with llama-gen-docs
Author: Pascal
Date: 2025-12-17 21:45:45 +01:00
Committed by: GitHub
Parent: e85e9d7637
Commit: 6ce3d85796
15 changed files with 163 additions and 27 deletions
@@ -8,6 +8,7 @@
 #include "log.h"
 #include &lt;atomic&gt;
+#include &lt;exception&gt;
 #include &lt;signal.h&gt;
 #include &lt;thread&gt; // for std::thread::hardware_concurrency
@@ -124,7 +125,12 @@ int main(int argc, char ** argv, char ** envp) {
     std::optional&lt;server_models_routes&gt; models_routes{};
     if (is_router_server) {
         // setup server instances manager
-        models_routes.emplace(params, argc, argv, envp);
+        try {
+            models_routes.emplace(params, argc, argv, envp);
+        } catch (const std::exception &amp; e) {
+            LOG_ERR("%s: failed to initialize router models: %s\n", __func__, e.what());
+            return 1;
+        }
         // proxy handlers
         // note: routes.get_health stays the same