Files
local_swarm/tests/test_federation_metrics.py
sleepy d33fa406b6 feat: CUDA/Android support and federation metrics (#7)
* optimize(federation): run local and peer generation in parallel

Previously, the federation waited for local generation to complete
before asking peers to generate. This wasted time since peers sat
idle while the host generated.

Now local swarm and all peers generate simultaneously:
- Fire local generation AND peer requests at the same time
- Wait for all to complete with asyncio.gather()
- Then run global consensus

This reduces total generation time from ~2x to ~1x when using
federation with multiple nodes.

Changes:
- Modified generate_with_federation() to run tasks in parallel
- Updated logging to reflect parallel execution
- Added proper error handling for local generation failures

* feat(federation): add federation support to streaming path

Previously, federation only worked with non-streaming requests.
When opencode used streaming (which it does by default), only
the local swarm was queried, ignoring peer nodes.

Now when federation is enabled and peers exist:
- Start federation generation in background (parallel)
- Stream from local swarm immediately
- Log federation results when complete

This enables federation to work with opencode and other
streaming clients while maintaining fast streaming response.

Also added webfetch instructions to prevent hallucinating URLs.

Changes:
- Modified streaming path to detect and use federation
- Added asyncio import
- Updated tool instructions to prevent URL hallucination

* fix(federation): wait for consensus and use federated result in streaming

Changed federation in streaming mode to:
- Wait for ALL nodes to complete generation
- Use the consensus result (not just local)
- Stream the federated response to client

This ensures voting from all nodes is properly considered.

Previous implementation streamed locally while federation ran
in background for logging only, which ignored the consensus.

* fix(federation): properly stream federated response

The federation case was setting the response but not returning
a StreamingResponse, so nothing was sent back to the client.

Added proper streaming generator for federation results that:
- Sends role chunk
- Streams content in chunks
- Sends final [DONE] chunk

This fixes the issue where opencode only saw local node output.

* feat(federation): add winner tracking and token usage reporting

- Track which node won the consensus voting (local or peer name)
- Add winner to FederationResult dataclass
- Log winner in server logs
- Calculate and report token usage in federation streaming
- Fix prompt_tokens calculation in streaming path

Now opencode will show:
- Context tokens used
- Which node won the vote (in logs)

* fix(federation): parse tool calls from federated response

Federation now properly handles tools:
- Removed 'not has_tools' condition so federation works with tools
- Added tool call parsing for federated responses
- Returns proper tool_calls delta with finish_reason=tool_calls
- Falls through to content streaming when no tool calls

This fixes opencode issue where federation was skipped
when tools were present.

* fix(federation): fix token count scope issue in generators

The async generators couldn't access the token count variables
because they were in the outer function scope. Fixed by:
- Calculating token counts inside each generator function
- Using separate local variable names to avoid scope issues
- Both tool_calls and content streaming now work correctly

* config(federation): increase peer timeout from 30s to 60s

Federation client timeout determines how long to wait for
peer responses before giving up and falling back to local result.

Changed from 30s to 60s to give peers more time to respond
especially on slower networks or machines.

* feat(federation): add CUDA/Android support and peer metrics tracking

Changes:
- GPU layer auto-configuration based on hardware detection
  - Offload all layers for Apple Silicon
  - Configure NVIDIA layers based on GPU count and compute capability
  - Add GPU device count and compute capability tracking

- Android platform detection
  - Detect Android via environment variables and file paths
  - Check /proc/sys/kernel/osrelease for kernel version
  - Normalize Android file paths (~ expansion, /sdcard alternatives)
  - Android-specific paths in hardware/qualcomm.py

- Federation metrics tracking
  - Add PeerMetrics dataclass with success rate, avg latency, error tracking
  - Track total requests, successful requests, failed requests
  - Record last error with timestamp
  - Add success_rate property (auto-calculated)

- Peer-specific timeout configuration
  - Add timeout_seconds to PeerInfo dataclass
  - Use peer-specific timeout in FederationClient requests
  - Use aiohttp.ClientTimeout for proper timeout handling
  - Track request start time for accurate latency calculation

- Comprehensive tests
  - test_hardware_detector.py: 14 test cases for GPU detection and Android
  - test_federation_metrics.py: 13 test cases for metrics and timeouts
  - All 35 tests pass (100% pass rate)

- Documentation
  - Add TODO.md with CUDA/Android implementation status
  - Document known issues and recommendations
  - Testing checklist and implementation priorities

Token impact: No prompt changes
Tests: 35/35 passing

Resolves federation timeout and observability issues.
2026-02-25 00:53:07 +01:00

167 lines
5.6 KiB
Python

"""Tests for federation metrics and peer timeout."""
import sys
import os
sys.path.insert(0, os.path.join(os.path.dirname(__file__), '..', 'src'))
import pytest
from datetime import datetime
from network.discovery import PeerInfo, PeerMetrics
from network.federation import FederationClient, PeerVote
class TestPeerMetrics:
"""Test peer metrics tracking."""
def test_peer_metrics_defaults(self):
"""Test default metric values."""
metrics = PeerMetrics()
assert metrics.total_requests == 0
assert metrics.successful_requests == 0
assert metrics.failed_requests == 0
assert metrics.success_rate == 1.0 # No requests = 100% success
def test_record_success(self):
"""Test recording successful requests."""
metrics = PeerMetrics()
metrics.record_success(100.0)
assert metrics.total_requests == 1
assert metrics.successful_requests == 1
assert metrics.failed_requests == 0
assert metrics.success_rate == 1.0
assert metrics.avg_latency_ms == 100.0
# Record another success
metrics.record_success(200.0)
assert metrics.total_requests == 2
assert metrics.avg_latency_ms == 150.0 # (100 + 200) / 2
def test_record_failure(self):
"""Test recording failed requests."""
metrics = PeerMetrics()
metrics.record_failure("Connection timeout")
assert metrics.total_requests == 1
assert metrics.successful_requests == 0
assert metrics.failed_requests == 1
assert metrics.success_rate == 0.0
assert metrics.last_error == "Connection timeout"
assert metrics.last_error_time is not None
def test_mixed_success_and_failure(self):
"""Test mixed success and failure recording."""
metrics = PeerMetrics()
metrics.record_success(100.0)
metrics.record_failure("Error")
metrics.record_success(150.0)
assert metrics.total_requests == 3
assert metrics.successful_requests == 2
assert metrics.failed_requests == 1
assert metrics.success_rate == 2/3
class TestPeerInfo:
"""Test PeerInfo with metrics and timeout."""
def test_peer_info_defaults(self):
"""Test PeerInfo default values."""
peer = PeerInfo(
host="192.168.1.100",
port=17615,
name="test-peer",
version="0.1.0",
instances=2,
model_id="qwen:7b:q4",
hardware_summary="Apple M1 Pro",
last_seen=datetime.now()
)
assert peer.timeout_seconds == 60.0 # Default timeout
assert peer.metrics is not None
assert isinstance(peer.metrics, PeerMetrics)
assert peer.api_url == "http://192.168.1.100:17615"
def test_peer_info_custom_timeout(self):
"""Test PeerInfo with custom timeout."""
peer = PeerInfo(
host="192.168.1.100",
port=17615,
name="slow-peer",
version="0.1.0",
instances=1,
model_id="test-model",
hardware_summary="CPU only",
last_seen=datetime.now(),
timeout_seconds=120.0 # Custom timeout
)
assert peer.timeout_seconds == 120.0
class TestFederationClient:
"""Test FederationClient with peer-specific timeouts."""
@pytest.fixture
def client(self):
return FederationClient(timeout=60.0)
@pytest.fixture
def fast_peer(self):
return PeerInfo(
host="192.168.1.10",
port=17615,
name="fast-peer",
version="0.1.0",
instances=2,
model_id="qwen:7b:q4",
hardware_summary="Apple M1 Max",
last_seen=datetime.now(),
timeout_seconds=30.0 # Fast peer with short timeout
)
@pytest.fixture
def slow_peer(self):
return PeerInfo(
host="192.168.1.11",
port=17615,
name="slow-peer",
version="0.1.0",
instances=1,
model_id="qwen:7b:q4",
hardware_summary="CPU only",
last_seen=datetime.now(),
timeout_seconds=90.0 # Slow peer with longer timeout
)
def test_peer_timeout_override(self, client, fast_peer, slow_peer):
"""Test that peer-specific timeout overrides default."""
# The client should use the peer's timeout, not the default
assert fast_peer.timeout_seconds == 30.0
assert slow_peer.timeout_seconds == 90.0
assert client.timeout == 60.0 # Default unchanged
def test_metrics_updated_on_success(self, fast_peer):
"""Test that metrics are updated on successful request."""
assert fast_peer.metrics.total_requests == 0
# Simulate recording a success (this would happen in request_vote)
fast_peer.metrics.record_success(150.0)
assert fast_peer.metrics.total_requests == 1
assert fast_peer.metrics.successful_requests == 1
assert fast_peer.metrics.success_rate == 1.0
def test_metrics_updated_on_failure(self, slow_peer):
"""Test that metrics are updated on failed request."""
assert slow_peer.metrics.total_requests == 0
# Simulate recording a failure
slow_peer.metrics.record_failure("Connection refused")
assert slow_peer.metrics.total_requests == 1
assert slow_peer.metrics.failed_requests == 1
assert slow_peer.metrics.success_rate == 0.0
assert slow_peer.metrics.last_error == "Connection refused"