# OpenAI GPT Models Feedback for Hermes Agent

**Source reference:** Official docs, community discussions, blog posts

---

## Supported Models

Hermes Agent supports OpenAI models including:

- GPT-4o / GPT-4o-mini
- GPT-5 series (via API)
- o1 / o3 (reasoning models)
- Codex models (with special OAuth handling)

---

## Codex Integration

**Special Feature:** When using Anthropic OAuth through `hermes model`, Hermes prefers Claude Code's own credential store over copying the token into `~/.hermes/.env`. This keeps refreshable credentials working properly.

**Copilot Alternative:**

```
copilot — Direct Copilot API (recommended)
```

Uses your GitHub Copilot subscription to access GPT-5.x, Claude, Gemini, and other models through the Copilot API.

---

## Auxiliary Vision Configuration

**Recommended setup for GPT-4o vision:**

```yaml
auxiliary:
  vision:
    provider: "openrouter"
    model: "openai/gpt-4o"
```

**Using Codex OAuth (ChatGPT Pro/Plus):**

```yaml
auxiliary:
  vision:
    provider: "codex"  # uses your ChatGPT OAuth token
    # model defaults to gpt-5.3-codex (supports vision)
```

---

## Token Overhead

The same ~13.9K-token fixed overhead applies to OpenAI models:

| Component | Tokens |
|-----------|--------|
| Tool definitions | 8,759 |
| System prompt | 5,176 |
| **Fixed overhead** | **~13,935** |

---

## Community Feedback

### Positive

- Reliable tool calling
- Good for complex reasoning tasks
- Widely tested and supported

### Cost Considerations

- GPT-4 class models are expensive for high-volume usage
- Consider budget models (GPT-4o-mini) for simpler tasks
- The fixed token overhead adds a significant cost multiplier

---

## Provider Agnostic Design

Hermes allows easy switching between providers:

```bash
hermes model  # Interactive provider selection
```

**Switch without code changes:**

- OpenAI → Anthropic → Local models
- No configuration file editing required
- API keys stored securely in `~/.hermes/.env`

---

## Recommendations

1. **Use GPT-4 class models** for complex architecture decisions
2. **Use GPT-4o-mini** for routine tasks to reduce costs
3. **Enable response caching** when available
4. **Monitor token usage** with the `/usage` command
5. **Consider OpenRouter** for flexibility across multiple frontier models
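
To see why the fixed overhead matters for cost, here is a minimal back-of-the-envelope sketch. The per-million-token prices and the `request_cost` helper are illustrative assumptions for this example, not Hermes code or current OpenAI rates; only the ~13,935-token overhead figure comes from the table above.

```python
# Rough illustration of how the ~13.9K fixed overhead inflates per-request
# input cost. Prices below are placeholder values, NOT current OpenAI rates,
# and request_cost is a hypothetical helper, not part of Hermes.

FIXED_OVERHEAD = 13_935  # tool definitions (8,759) + system prompt (5,176)

def request_cost(prompt_tokens: int, completion_tokens: int,
                 price_in: float, price_out: float) -> float:
    """USD cost for one request; prices are given per 1M tokens."""
    billed_in = prompt_tokens + FIXED_OVERHEAD  # overhead rides on every call
    return billed_in / 1e6 * price_in + completion_tokens / 1e6 * price_out

# A short 500-token task actually pays for 14,435 input tokens:
multiplier = (500 + FIXED_OVERHEAD) / 500
print(f"input cost multiplier: {multiplier:.1f}x")  # → 28.9x
```

This is why the recommendation to route routine tasks to a budget model matters: the overhead dominates short requests, so the cheaper per-token rate compounds.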