# Backend Polling Implementation
The DeployStack Satellite implements an adaptive HTTP polling system for outbound-only communication with the backend. This firewall-friendly approach enables command orchestration, configuration synchronization, and status reporting without requiring inbound connections to the satellite.
## Polling Architecture
### Core Components
The polling system consists of four integrated services:
```
┌─────────────────────────────────────────────────────────────────────────────────┐
│ Satellite Polling Architecture │
│ │
│ ┌─────────────────┐ ┌─────────────────┐ ┌─────────────────┐ │
│ │ Command Polling │ │ Dynamic Config │ │ Command │ │
│ │ Service │ │ Manager │ │ Processor │ │
│ │ │ │ │ │ │ │
│ │ • Adaptive Poll │ │ • MCP Server │ │ • HTTP Proxy │ │
│ │ • Command Queue │ │ Config Sync │ │ Management │ │
│ │ • Error Backoff │ │ • Validation │ │ • Health Checks │ │
│ └─────────────────┘ └─────────────────┘ └─────────────────┘ │
│ │
│ ┌─────────────────────────────────────────────────────────────────────────┐ │
│ │ Heartbeat Service │ │
│ │ │ │
│ │ • Process Status Reporting • System Metrics Collection │ │
│ │ • 30-second Intervals • Error Count Tracking │ │
│ └─────────────────────────────────────────────────────────────────────────┘ │
└─────────────────────────────────────────────────────────────────────────────────┘
```

### Service Integration Flow
```
Startup → Registration → Polling Start → Configuration Sync → Command Processing
│ │ │ │ │
Backend API Key Poll Timer MCP Servers HTTP Proxy
Connect      Received       Started        Updated         Ready
```

## Command Polling Service
### Adaptive Polling Strategy
The polling service implements priority-based polling with automatic mode transitions (a sketch of the interval selection follows the mode list):

**Immediate Mode (2 seconds):**

- Activated when `immediate` priority commands are pending
- Used for MCP installations and critical updates
- Enables the 3-second end-to-end response time goal
- Automatically returns to normal mode when immediate commands are processed

**High Priority Mode (10 seconds):**

- Activated when `high` priority commands are pending
- Used for MCP deletions and configuration changes
- Balances urgency with resource efficiency

**Normal Mode (30 seconds):**

- Activated when only `normal` priority commands are pending
- Used for routine maintenance and non-urgent tasks
- Default polling interval for steady-state operation

**Slow Mode (60 seconds):**
- Used when no commands are pending
- Minimizes backend load during idle periods
- Automatically switches to faster modes when commands arrive

**Error Mode (exponential backoff):**
- Activated when polling requests fail
- Starts at current interval, doubles on each failure
- Maximum backoff of 300 seconds (5 minutes)
- Resets to appropriate priority mode on successful poll
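The transition rules above reduce to a small selection function. The following is a minimal sketch under the documented intervals; names such as `selectPollingMode` and `POLL_INTERVALS` are illustrative, not the actual implementation inside `CommandPollingService`:

```typescript
type PollingMode = 'immediate' | 'high' | 'normal' | 'slow' | 'error';
type CommandPriority = 'immediate' | 'high' | 'normal';

// Intervals in seconds, mirroring the modes described above.
const POLL_INTERVALS: Record<PollingMode, number> = {
  immediate: 2,
  high: 10,
  normal: 30,
  slow: 60,
  error: 30, // starting point for exponential backoff, capped at 300s
};

// Pick the fastest mode warranted by the highest-priority pending command.
function selectPollingMode(pending: CommandPriority[]): PollingMode {
  if (pending.includes('immediate')) return 'immediate';
  if (pending.includes('high')) return 'high';
  if (pending.length > 0) return 'normal';
  return 'slow'; // nothing pending: minimize backend load
}
```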
### Polling Implementation
```typescript
class CommandPollingService {
  private currentPollingMode: 'immediate' | 'normal' | 'error' = 'normal';
  private currentInterval: number = 30; // seconds
  private lastPollTime: Date = new Date();

  private async pollForCommands(): Promise<void> {
    // Only fetch commands issued since the last successful poll.
    const queryParams = new URLSearchParams();
    queryParams.set('last_poll', this.lastPollTime.toISOString());
    queryParams.set('limit', '10');

    // backendUrl, satelliteId, and apiKey come from the service configuration.
    const response = await fetch(
      `${backendUrl}/api/satellites/${satelliteId}/commands?${queryParams}`,
      {
        headers: { 'Authorization': `Bearer ${apiKey}` },
        signal: AbortSignal.timeout(15000) // 15-second connection timeout
      }
    );

    const pollResponse = await response.json();

    // Let the backend steer the polling cadence.
    this.updatePollingStrategy(
      pollResponse.polling_mode,
      pollResponse.next_poll_interval
    );
  }
}
```
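For reference, the response shape the service reads can be sketched as follows. Only `polling_mode` and `next_poll_interval` appear in the excerpt above; the structure of the `commands` array is an assumption, not the authoritative backend schema:

```typescript
// Assumed shape of a poll response, inferred from the fields the service reads.
interface PollResponse {
  commands: Array<{
    id: string;
    type: 'configure' | 'spawn' | 'kill' | 'restart' | 'health_check';
    priority: 'immediate' | 'high' | 'normal';
    payload: unknown;
  }>;
  polling_mode: string;       // backend's suggested polling mode
  next_poll_interval: number; // seconds until the next poll
}
```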
### Command Processing Pipeline

Commands flow through a structured processing pipeline:

1. **Command Validation**: Payload validation and format checking
2. **Command Routing**: Route to the appropriate processor based on command type (see the dispatch sketch below)
3. **Execution**: Process the command with error handling and a timeout
4. **Result Reporting**: Send execution results back to the backend
**Supported Command Types:**

- `configure` - Update MCP server configuration
- `spawn` - Start HTTP MCP server proxy
- `kill` - Stop HTTP MCP server proxy
- `restart` - Restart HTTP MCP server proxy
- `health_check` - Perform health checks on all servers
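A minimal sketch of the routing step, using a hypothetical dispatch table; the handler bodies are stubs, and the real processors live in `CommandProcessor`:

```typescript
type CommandType = 'configure' | 'spawn' | 'kill' | 'restart' | 'health_check';

interface SatelliteCommand {
  id: string;
  type: CommandType;
  payload: unknown;
}

// Hypothetical dispatch table: each command type maps to a handler.
const handlers: Record<CommandType, (payload: unknown) => Promise<void>> = {
  configure: async (_payload) => { /* update MCP server configurations */ },
  spawn: async (_payload) => { /* start an HTTP MCP server proxy */ },
  kill: async (_payload) => { /* stop an HTTP MCP server proxy */ },
  restart: async (_payload) => { /* restart an HTTP MCP server proxy */ },
  health_check: async (_payload) => { /* health-check all managed servers */ },
};

async function routeCommand(command: SatelliteCommand): Promise<void> {
  const handler = handlers[command.type];
  if (!handler) throw new Error(`Unknown command type: ${command.type}`); // guard for untrusted input
  await handler(command.payload);
}
```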
## Dynamic Configuration Management
### Configuration Sync Process
The satellite replaces hardcoded MCP server configurations with dynamic updates from the backend:
```typescript
interface ConfigurationUpdate {
  mcp_servers: Record<string, McpServerConfig>;
  polling_intervals?: {
    normal: number;
    immediate: number;
    error_backoff_max: number;
  };
  resource_limits?: {
    max_processes: number;
    max_memory_per_process: string;
  };
}
```
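An illustrative update conforming to this interface; the server details and limit values are examples, and the shape of `McpServerConfig` is assumed:

```typescript
// Example payload; values are illustrative, not real configuration.
const exampleUpdate: ConfigurationUpdate = {
  mcp_servers: {
    'github-mcp': {
      type: 'http',
      url: 'https://mcp.example.com/github',
      timeout: 10000,
    } as McpServerConfig, // McpServerConfig shape assumed
  },
  polling_intervals: { normal: 30, immediate: 2, error_backoff_max: 300 },
  resource_limits: { max_processes: 10, max_memory_per_process: '256MB' },
};
```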
### Configuration Validation

All incoming configurations undergo strict validation:

**Server Configuration Validation:**

- URL format validation using `new URL()`
- Server type restriction to `'http'` only
- Timeout value validation (positive numbers)
- Required field presence checking

**Configuration Change Detection:**

- Deep comparison of server configurations
- Identification of added, removed, and modified servers
- Structured logging of all configuration changes
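Change detection can be sketched as a key-wise diff of the previous and incoming server maps; the JSON-based deep comparison below is illustrative, not necessarily how the implementation compares entries:

```typescript
// Key-wise diff of two server maps. JSON.stringify serves as a naive deep
// comparison here; the actual implementation may compare field by field.
function diffServerConfigs(
  previous: Record<string, unknown>,
  next: Record<string, unknown>
): { added: string[]; removed: string[]; modified: string[] } {
  const added = Object.keys(next).filter((key) => !(key in previous));
  const removed = Object.keys(previous).filter((key) => !(key in next));
  const modified = Object.keys(next).filter(
    (key) => key in previous && JSON.stringify(previous[key]) !== JSON.stringify(next[key])
  );
  return { added, removed, modified };
}
```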
### Integration with Existing Services
Configuration updates trigger cascading updates across satellite services:
```
Config Update → Dynamic Config Manager → HTTP Proxy Manager → Tool Discovery Manager
│ │ │ │
Validate Apply Changes Re-initialize Rediscover Tools
Changes         Update Cache             Proxy Routes        Update Cache
```

## HTTP Proxy Management Integration
### Dynamic Server Registration
The HTTP Proxy Manager integrates with the dynamic configuration system:
```typescript
class HttpProxyManager {
  private configManager?: DynamicConfigManager;

  async handleConfigurationUpdate(config: DynamicMcpServersConfig): Promise<void> {
    // Re-initialize proxy routes with the new server configurations
    await this.initialize();
  }
}
```

### Server Health Monitoring
The command processor implements health checking for HTTP MCP servers:
```typescript
private async checkServerHealth(processInfo: ProcessInfo): Promise<HealthResult> {
  const startTime = Date.now();

  // serverConfig is resolved from processInfo; the server is probed with a
  // lightweight JSON-RPC tools/list request.
  const response = await fetch(serverConfig.url, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json', ...serverConfig.headers },
    body: JSON.stringify({
      jsonrpc: '2.0',
      id: 'health-check',
      method: 'tools/list',
      params: {}
    }),
    signal: AbortSignal.timeout(serverConfig.timeout || 10000)
  });

  return {
    health_status: response.ok ? 'healthy' : 'unhealthy',
    response_time_ms: Date.now() - startTime
  };
}
```

## Tool Discovery Integration
### Dynamic Tool Rediscovery
The Remote Tool Discovery Manager integrates with configuration updates:
```typescript
class RemoteToolDiscoveryManager {
  async handleConfigurationUpdate(config: DynamicMcpServersConfig): Promise<void> {
    // Reset and rediscover tools from the updated server configurations
    this.isInitialized = false;
    this.cachedTools = [];
    await this.initialize();
  }
}
```

### Tool Cache Management
Tool discovery maintains an in-memory cache that updates when server configurations change:

- **Cache Invalidation**: Complete cache reset on configuration changes
- **Namespace Preservation**: Tools maintain server-prefixed naming (sketched below)
- **Error Resilience**: Failed discoveries don't block other servers
- **Performance Optimization**: Memory-only storage for fast access
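A sketch of how server-prefixed naming might be applied when populating the cache; the `/` separator and the tool shape are assumptions for illustration:

```typescript
// Server-prefixed tool naming; separator and shape are assumed.
interface CachedTool {
  name: string;       // e.g. 'github-mcp/list_issues'
  serverName: string; // originating MCP server
}

function namespaceTools(serverName: string, toolNames: string[]): CachedTool[] {
  return toolNames.map((toolName) => ({
    name: `${serverName}/${toolName}`, // keeps tools from different servers distinct
    serverName,
  }));
}
```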
## Error Handling and Recovery
### Polling Error Recovery
The polling service implements comprehensive error handling:

**Network Errors:**
- Automatic retry with exponential backoff
- Maximum backoff limit of 300 seconds
- Connection timeout handling (15 seconds)
- Graceful degradation on persistent failures
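The documented backoff policy reduces to a one-line calculation:

```typescript
// Double the interval on each consecutive failure, capped at 300 seconds.
function nextBackoffInterval(currentSeconds: number, maxSeconds = 300): number {
  return Math.min(currentSeconds * 2, maxSeconds);
}
// From a 30-second interval, repeated failures yield 60s, 120s, 240s, then 300s.
```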

**Authentication Errors:**
- 401 Unauthorized handling
- API key validation logging
- Structured error reporting to logs

**Configuration Errors:**

- Invalid server configuration rejection
- Partial configuration application
- Rollback to the previous working configuration (sketched below)
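A sketch of partial application with rollback semantics, under the assumption that server entries are validated individually; the names are illustrative, not the `DynamicConfigManager` API:

```typescript
// Valid entries are applied, invalid entries fall back to the previous working
// version, and invalid entries with no prior version are dropped.
function applyWithFallback<T>(
  current: Record<string, T>,
  incoming: Record<string, T>,
  isValid: (entry: T) => boolean
): Record<string, T> {
  const applied: Record<string, T> = {};
  for (const [name, entry] of Object.entries(incoming)) {
    if (isValid(entry)) {
      applied[name] = entry;
    } else if (name in current) {
      applied[name] = current[name]; // roll back to the last working entry
    }
  }
  return applied;
}
```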
### Command Execution Error Handling
Command processing includes robust error handling:
```typescript
async processCommand(command: SatelliteCommand): Promise<CommandResult> {
  try {
    // Execute the command with timeout and error handling
    const result = await this.executeCommand(command);
    return { command_id: command.id, status: 'completed', result };
  } catch (error) {
    return {
      command_id: command.id,
      status: 'failed',
      error: error instanceof Error ? error.message : String(error)
    };
  }
}
```

## Heartbeat Integration
### Process Status Reporting
The heartbeat service integrates with the command processor to report process status:
```typescript
class HeartbeatService {
  setCommandProcessor(commandProcessor: CommandProcessor): void {
    this.commandProcessor = commandProcessor;
  }

  private async sendHeartbeat(): Promise<void> {
    const processes = this.commandProcessor
      ? this.commandProcessor.getAllProcesses()
      : [];

    const payload = {
      status: 'active',
      system_metrics: await this.collectSystemMetrics(),
      processes: processes,
      error_count: 0,
      version: '0.1.0'
    };
    // POST of the payload to the backend heartbeat endpoint omitted here.
  }
}
```

### System Metrics Collection
Current system metrics include:

- **Memory Usage**: Node.js heap usage in MB
- **Process Uptime**: Satellite process uptime in seconds
- **Process Count**: Number of managed HTTP proxy processes
- **Error Count**: Recent error count for health assessment
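A sketch of how these metrics can be gathered with standard Node.js APIs; the payload keys follow the list above and may differ from the actual heartbeat field names:

```typescript
// Metric collection via process.memoryUsage() and process.uptime().
function collectSystemMetrics(managedProcessCount: number, recentErrorCount: number) {
  return {
    memory_usage_mb: Math.round(process.memoryUsage().heapUsed / 1024 / 1024),
    uptime_seconds: Math.round(process.uptime()),
    process_count: managedProcessCount,
    error_count: recentErrorCount,
  };
}
```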
## Development Integration
### Service Initialization Order
The polling system requires specific initialization order:
```typescript
// 1. Backend connection and registration
const backendClient = new BackendClient(backendUrl, logger);
await backendClient.testConnection();
const registration = await backendClient.registerSatellite(data);

// 2. Configuration and processing services
const dynamicConfigManager = new DynamicConfigManager(logger);
const commandProcessor = new CommandProcessor(logger, dynamicConfigManager);

// 3. HTTP proxy and tool discovery with config integration
const httpProxyManager = new HttpProxyManager(server, logger);
httpProxyManager.setConfigManager(dynamicConfigManager);
const toolDiscoveryManager = new RemoteToolDiscoveryManager(logger);
toolDiscoveryManager.setConfigManager(dynamicConfigManager);

// 4. Polling service with handlers
const commandPollingService = new CommandPollingService(satelliteId, backendClient, logger);
commandPollingService.setConfigurationUpdateHandler(handleConfigUpdate);
commandPollingService.setCommandHandler(handleCommand);
commandPollingService.start();
```

### Environment Configuration
Polling behavior is controlled by environment variables:
```bash
# Backend connection
DEPLOYSTACK_BACKEND_URL=http://localhost:3000

# Satellite identification
DEPLOYSTACK_SATELLITE_NAME=dev-satellite-001

# Logging level affects polling debug output
LOG_LEVEL=debug
```

## Performance Characteristics
### Polling Efficiency
The adaptive polling strategy optimizes resource usage:

- **Normal Operations**: 30-second intervals minimize backend load
- **Immediate Response**: 2-second intervals for urgent commands
- **Error Backoff**: Exponential backoff prevents cascade failures
- **Network Optimization**: Query parameters reduce response size
### Memory Usage
The polling system maintains minimal memory footprint:

- **Configuration Cache**: ~1KB per MCP server configuration
- **Command Queue**: Temporary storage for pending commands
- **Tool Cache**: ~1KB per discovered tool
- **Process Tracking**: Minimal metadata per HTTP proxy process
### Network Traffic
Polling generates predictable network patterns:

- **Command Polling**: Small JSON requests every 30 seconds (normal mode)
- **Configuration Sync**: Infrequent larger payloads on configuration changes
- **Heartbeats**: Regular status reports every 30 seconds
- **Command Results**: Small JSON responses after command execution
**Implementation Status**: The polling system is fully implemented and operational. It successfully handles command orchestration, configuration synchronization, and status reporting through outbound-only HTTP communication with the backend.
## Troubleshooting
### Common Issues
**401 Unauthorized Errors:**

- Indicates missing backend endpoints for satellite management
- Expected during development when backend endpoints are not implemented
- The satellite continues normal operation; polling will succeed once the endpoints exist

**Configuration Validation Failures:**

- Check server URL format and accessibility
- Verify the server type is set to `'http'`
- Ensure timeout values are positive numbers

**Polling Failures:**
- Check backend connectivity and availability
- Verify satellite API key is valid
- Monitor exponential backoff behavior in logs
### Debug Logging
Enable debug logging to monitor polling behavior:
```bash
LOG_LEVEL=debug npm run dev
```

Debug logs include:
- Polling attempt details and timing
- Configuration update processing
- Command execution results
- Error handling and recovery actions