Fix model references and benchmark data across all feedback files

Qwen Model Corrections:
- Added Model Reference Guide to clarify Qwen3 vs Qwen 3.5 families
- Qwen3: 0.6B, 1.7B, 4B, 8B, 14B, 32B + MoE (30B-A3B, 235B-A22B)
- Qwen 3.5: 0.8B, 2B, 4B, 9B + MoE (27B, 122B-A10B, 397B-A17B)
- Fixed 'Qwen3.5-35B-A3B' -> 'Qwen3-30B-A3B' (non-existent model corrected)
- Note: Qwen 3.5 14B does NOT exist; references likely mean Qwen3-14B

Terminal-Bench 2.0 Fixes:
- Clarified that Terminal-Bench measures HARNESS+MODEL combinations
- Updated rankings with current leaderboard data (April 2026):
  - #1: Pilot + Claude Opus 4.6: 82.9%
  - #2: ForgeCode + GPT-5.4: 81.8%
  - #3: ForgeCode + Claude Opus 4.6: 81.8%
- Removed incorrect 'GPT-5.4 Rank #1' claims (scores vary by harness)
- Added harness attribution to all Terminal-Bench references

SWE-Bench Pro Updates (Verified):
- #1: Claude Mythos Preview: 77.8%
- #2: GLM-5.1: 58.4% (top open-source)
- #3: GPT-5.4: 57.7%
- Added source references to llm-stats.com

Files Modified:
- forgecode/feedback/localllm/qwen-3.5.md
- forgecode/feedback/frontier/benchmark-controversy.md
- hermes/feedback/localllm/qwen-models-feedback.md
- opencode/opencode/feedback/SUMMARY.md
- opencode/opencode/feedback/frontier/frontier-model-feedback.md
- opencode/opencode/feedback/localllm/local-llm-feedback.md
- pi/feedback/frontier/frontier-model-feedback.md
2026-04-09 16:05:14 +02:00
parent 2623737ad2
commit f561bed731
7 changed files with 141 additions and 67 deletions
@@ -1,6 +1,6 @@
-# Qwen 3.5 with ForgeCode - Feedback Report
+# Qwen Models with ForgeCode - Feedback Report
-**Model:** Qwen 3.5
+**Models Covered:** Qwen 3.5, Qwen3
 **Provider:** Alibaba Cloud (via local inference)
 **Harness:** ForgeCode
 **Source References:** GitHub Issue #2894, Reddit r/LocalLLaMA
@@ -8,12 +8,24 @@
 ---
+## Model Reference Guide
+| Model Family | Available Sizes | Notes |
+|--------------|-----------------|-------|
+| **Qwen 3.5** | 0.8B, 2B, 4B, 9B (dense); 27B, 122B-A10B, 397B-A17B (MoE) | Released Feb 2026 |
+| **Qwen3** | 0.6B, 1.7B, 4B, 8B, 14B, 32B (dense); 30B-A3B, 235B-A22B (MoE) | Released April 2025 |
+| **Qwen2.5** | 0.5B, 1.5B, 3B, 7B, 14B, 32B, 72B + Coder variants | Earlier generation |
+> **Note:** References to "Qwen 3.5 14B" in community discussions likely mean Qwen3-14B or Qwen2.5-14B.
+---
 ## Known Issues
 ### Multiple System Messages Bug
 **GitHub Issue:** #2894 (Open as of April 8, 2026)
-**Problem:** Multiple system messages break models with strict chat templates (e.g., Qwen3.5)
+**Problem:** Multiple system messages break models with strict chat templates (e.g., Qwen3, Qwen 3.5)
 **Error Manifestation:**
 - Models with strict chat templates fail to parse message structure correctly
@@ -22,7 +34,7 @@
 **Impact:**
 - Affects local inference with llama.cpp, Ollama, and similar servers
-- Qwen3.5 specifically mentioned as affected
+- Qwen3 and Qwen 3.5 specifically mentioned as affected
 **Workaround Status:** No official fix yet; issue under investigation
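There is no official fix for the multiple-system-messages bug, but the obvious client-side mitigation for this class of chat-template error is to collapse all system messages into a single leading one before the request reaches llama.cpp or Ollama. The sketch below is purely illustrative and is not part of ForgeCode: the `merge_system_messages` name and the OpenAI-style role/content dicts are assumptions.

```python
def merge_system_messages(messages):
    """Collapse every system message into one leading system message.

    Strict chat templates (the Qwen families are cited in issue #2894)
    can reject conversations containing more than one system message;
    concatenating them client-side sidesteps the template error while
    preserving the order of all non-system turns.
    """
    system_parts = [m["content"] for m in messages if m["role"] == "system"]
    rest = [m for m in messages if m["role"] != "system"]
    if not system_parts:
        return rest
    merged = {"role": "system", "content": "\n\n".join(system_parts)}
    return [merged] + rest
```

Whether joining with blank lines (rather than, say, keeping only the last system message) is the right merge policy depends on how the prompts were written; this sketch assumes they are complementary instructions.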