Satellite Backend Communication
DeployStack Satellite implements outbound-only HTTP polling communication with the Backend, following the GitHub Actions runner pattern for enterprise firewall compatibility. This document describes the communication implementation from the satellite perspective.
Communication Pattern
HTTP Polling Architecture
Satellites initiate all communication using outbound HTTPS requests:
```
Satellite                     Backend
    │                            │
    │──── GET /commands ────────▶│  (Poll for pending commands)
    │                            │
    │◀─── Commands Response ─────│  (MCP server tasks)
    │                            │
    │──── POST /heartbeat ──────▶│  (Report status, metrics)
    │                            │
    │◀─── Acknowledgment ────────│  (Confirm receipt)
```

Firewall Benefits:
- Works through corporate firewalls without inbound rules
- Functions behind network address translation (NAT)
- Supports corporate HTTP proxies
- No exposed satellite endpoints required
Adaptive Polling Strategy
Satellites adjust polling frequency based on Backend guidance:
- Immediate Mode: 2-second intervals when urgent commands pending
- Normal Mode: 30-second intervals for routine operations
- Backoff Mode: Exponential backoff up to 5 minutes on errors
- Maintenance Mode: Reduced polling during maintenance windows
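As a minimal sketch, interval selection could look like the following (the `PollingMode` type and the maintenance constant are illustrative assumptions; the real satellite derives intervals from Backend guidance):

```typescript
type PollingMode = 'immediate' | 'normal' | 'backoff' | 'maintenance';

// Illustrative interval selection mirroring the modes listed above.
function nextPollDelayMs(mode: PollingMode, consecutiveErrors: number): number {
  switch (mode) {
    case 'immediate':
      return 2_000;   // urgent commands pending
    case 'normal':
      return 30_000;  // routine operations
    case 'backoff':
      // Exponential backoff: 30s, 60s, 120s, ... capped at 5 minutes.
      return Math.min(30_000 * 2 ** consecutiveErrors, 300_000);
    case 'maintenance':
      return 300_000; // reduced polling; exact value is an assumption
  }
}
```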
Current Implementation
Phase 1: Basic Connection Testing ✅
The satellite currently implements basic Backend connectivity:
Environment Configuration:
```bash
# .env file
DEPLOYSTACK_BACKEND_URL=http://localhost:3000
```

Backend Client Service:
- Connection testing with 5-second timeout
- Health endpoint validation at `/api/health`
- Structured error responses with timing metrics
- Last connection status and response time tracking
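A sketch of what such a connection test could look like, assuming Node 18+ (`fetch`, `AbortSignal.timeout`); the `ConnectionStatus` shape is inferred from the debug endpoint response shown later:

```typescript
interface ConnectionStatus {
  backend_url: string;
  connection_status: 'connected' | 'unreachable';
  response_time_ms: number;
  last_check: string;
}

// Illustrative sketch: probe the Backend health endpoint with a 5s timeout.
async function testConnection(backendUrl: string): Promise<ConnectionStatus> {
  const started = Date.now();
  let status: ConnectionStatus['connection_status'] = 'unreachable';
  try {
    const res = await fetch(`${backendUrl}/api/health`, {
      signal: AbortSignal.timeout(5_000), // 5-second timeout
    });
    if (res.ok) status = 'connected';
  } catch {
    // Timeout or network error: leave status as 'unreachable'.
  }
  return {
    backend_url: backendUrl,
    connection_status: status,
    response_time_ms: Date.now() - started,
    last_check: new Date().toISOString(),
  };
}
```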
Fail-Fast Startup Logic:
```typescript
const connectionStatus = await backendClient.testConnection();
if (connectionStatus.connection_status === 'connected') {
  server.log.info('✅ Backend connection verified');
} else {
  server.log.error('❌ Backend unreachable - satellite cannot start');
  process.exit(1);
}
```

Debug Endpoint:
`GET /api/status/backend` - Returns connection status for troubleshooting
Phase 2: Satellite Registration ✅
Satellite registration is now fully implemented with secure JWT-based token authentication, which prevents unauthorized satellites from connecting.
For complete registration documentation, see Satellite Registration. For backend token management details, see Registration Token Authentication.
Phase 3: Heartbeat Authentication ✅
API Key Authentication:
- Bearer token authentication implemented for heartbeat requests
- API key validation using argon2 hash verification
- Automatic key rotation on satellite re-registration
Heartbeat Implementation:
- 30-second interval heartbeat reporting
- System metrics collection (CPU, memory, uptime)
- Process status reporting (empty array for now)
- Authenticated communication with Backend
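A minimal sketch of a heartbeat sender under these constraints; the endpoint path and payload field names are assumptions, not the Backend's documented contract:

```typescript
import os from 'node:os';

// Illustrative heartbeat: system metrics plus (currently empty) process status.
async function sendHeartbeat(backendUrl: string, apiKey: string): Promise<void> {
  const payload = {
    metrics: {
      cpu_load_1m: os.loadavg()[0],                   // CPU
      memory_used_bytes: os.totalmem() - os.freemem(), // memory
      uptime_seconds: process.uptime(),                // uptime
    },
    processes: [], // process status reporting (empty array for now)
  };
  // NOTE: the heartbeat URL below is a placeholder, not the real route.
  await fetch(`${backendUrl}/api/satellites/heartbeat`, {
    method: 'POST',
    headers: {
      'Authorization': `Bearer ${apiKey}`, // API key authentication
      'Content-Type': 'application/json',
    },
    body: JSON.stringify(payload),
  });
}

// Report on the 30-second interval described above:
// setInterval(() => sendHeartbeat(url, key).catch(console.error), 30_000);
```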
Phase 4: Command Polling ✅
Command Polling Implementation:
- Adaptive polling intervals based on command priorities
- Command queue processing with immediate, high, and normal priorities
- Status reporting and acknowledgment system
- Automatic polling mode switching based on pending commands
Priority-Based Polling:
- `immediate` priority commands trigger 2-second polling intervals
- `high` priority commands trigger 10-second polling intervals
- `normal` priority commands trigger 30-second polling intervals
- No pending commands defaults to 60-second polling intervals
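A sketch of this mapping from the highest pending priority to the next interval (function and type names are illustrative):

```typescript
type CommandPriority = 'immediate' | 'high' | 'normal';

// Pick the next polling interval from the highest-priority pending command;
// the constants mirror the list above.
function intervalForPending(pending: CommandPriority[]): number {
  if (pending.includes('immediate')) return 2_000;
  if (pending.includes('high')) return 10_000;
  if (pending.includes('normal')) return 30_000;
  return 60_000; // no pending commands
}
```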
Command Processing:
- MCP installation commands trigger configuration refresh
- MCP deletion commands trigger process cleanup
- System update commands trigger component updates
- Command completion reporting with correlation IDs
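A hypothetical dispatcher mirroring these behaviors; the command type names, payload shape, and helper functions are assumptions for illustration:

```typescript
interface SatelliteCommand {
  id: string;
  correlation_id: string;
  type: 'mcp_install' | 'mcp_delete' | 'system_update'; // illustrative names
  payload: unknown;
}

// Hypothetical helpers standing in for the real handlers.
declare function refreshConfiguration(payload: unknown): Promise<void>;
declare function cleanupProcesses(payload: unknown): Promise<void>;
declare function updateComponents(payload: unknown): Promise<void>;
declare function reportCompletion(id: string, correlationId: string): Promise<void>;

async function processCommand(cmd: SatelliteCommand): Promise<void> {
  switch (cmd.type) {
    case 'mcp_install':
      await refreshConfiguration(cmd.payload); // refresh MCP configuration
      break;
    case 'mcp_delete':
      await cleanupProcesses(cmd.payload);     // terminate affected processes
      break;
    case 'system_update':
      await updateComponents(cmd.payload);     // apply component updates
      break;
  }
  await reportCompletion(cmd.id, cmd.correlation_id); // include correlation ID
}
```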
Communication Components
Command Polling
Scope-Aware Endpoints:
- Global Satellites: `/api/satellites/global/{satelliteId}/commands`
- Team Satellites: `/api/teams/{teamId}/satellites/{satelliteId}/commands`
Polling Optimization:
- `X-Last-Poll` header for incremental updates
- Backend-guided polling intervals
- Command priority handling
- Automatic retry with exponential backoff
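A sketch combining the scope-aware URLs and the `X-Last-Poll` header (the header value format and response shape are assumptions):

```typescript
// Build the scope-aware commands URL from the endpoints listed above.
function commandsUrl(base: string, satelliteId: string, teamId?: string): string {
  return teamId
    ? `${base}/api/teams/${teamId}/satellites/${satelliteId}/commands`
    : `${base}/api/satellites/global/${satelliteId}/commands`;
}

// Poll for commands, sending X-Last-Poll so the Backend can return
// only what changed since the previous poll.
async function pollCommands(url: string, apiKey: string, lastPoll: string) {
  const res = await fetch(url, {
    headers: {
      'Authorization': `Bearer ${apiKey}`,
      'X-Last-Poll': lastPoll, // e.g. the previous poll's ISO timestamp
    },
  });
  if (!res.ok) throw new Error(`Poll failed: ${res.status}`);
  return res.json();
}
```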
Status Reporting
Heartbeat Communication:
- System metrics (CPU, memory, disk usage)
- Process status for all running MCP servers
- Network information and connectivity status
- Performance metrics and error counts
Command Result Reporting:
- Execution status and timing
- Process spawn results
- Error logs and diagnostics
- Correlation ID tracking for user feedback
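An illustrative result report covering those points; every field name here is an assumption about the payload shape:

```typescript
// Hypothetical command result report sent back to the Backend.
const resultReport = {
  command_id: 'cmd-123',
  correlation_id: 'corr-456',          // lets the Backend route user feedback
  status: 'completed',                 // execution status
  duration_ms: 2300,                   // execution timing
  process: { pid: 4242, spawned: true }, // process spawn result
  errors: [] as string[],              // error logs and diagnostics
};
```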
Resource Management
System Resource Limits
Per-Process Limits:
- 0.1 CPU cores maximum per MCP server process
- 100MB RAM maximum per MCP server process
- 5-minute idle timeout for automatic cleanup
- Maximum 50 concurrent processes per satellite
Enforcement Methods:
- Linux cgroups v2 for CPU and memory limits
- Process monitoring with automatic termination
- Resource usage reporting to Backend
- Early warning at 80% resource utilization
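As a sketch, the limits and the 80% early-warning check could be expressed as below; actual enforcement happens through cgroups v2 rather than in-process checks:

```typescript
// Limits mirroring the values listed above.
const LIMITS = {
  cpuCores: 0.1,                      // per MCP server process
  memoryBytes: 100 * 1024 * 1024,     // 100MB per MCP server process
  idleTimeoutMs: 5 * 60 * 1000,       // 5-minute idle timeout
  maxProcesses: 50,                   // per satellite
};

// Classify a process's memory usage for monitoring purposes.
function checkMemory(usedBytes: number): 'ok' | 'warn' | 'terminate' {
  if (usedBytes >= LIMITS.memoryBytes) return 'terminate'; // over the limit
  if (usedBytes >= LIMITS.memoryBytes * 0.8) return 'warn'; // early warning at 80%
  return 'ok';
}
```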
Team Isolation
Process-Level Isolation:
- Dedicated system users per team (e.g., `satellite-team-123`)
- Separate process groups for complete isolation
- Team-specific directories and permissions
- Network namespace isolation (optional)
Resource Boundaries:
- Team-scoped resource quotas
- Isolated credential management
- Separate logging and audit trails
- Team-aware command filtering
MCP Server Management
Dual MCP Server Support
stdio Subprocess Servers:
- Local MCP servers as child processes
- JSON-RPC communication over stdio
- Process lifecycle management (spawn, monitor, terminate)
- Team isolation with dedicated system users
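A minimal sketch of stdio JSON-RPC with a child process, assuming newline-delimited JSON framing and an `npx`-launched filesystem server purely for illustration (the real manager also buffers partial lines and applies the isolation above):

```typescript
import { spawn } from 'node:child_process';

// Spawn a local MCP server as a child process (default stdio: all piped).
const child = spawn('npx', ['-y', '@modelcontextprotocol/server-filesystem', '/tmp']);

child.stdout.setEncoding('utf8');
child.stdout.on('data', (chunk: string) => {
  // Simplified: assumes each chunk contains whole lines.
  for (const line of chunk.split('\n').filter(Boolean)) {
    console.log('response:', JSON.parse(line)); // JSON-RPC responses
  }
});

// Send a JSON-RPC request to the MCP server over stdin.
child.stdin.write(JSON.stringify({
  jsonrpc: '2.0',
  id: 1,
  method: 'tools/list',
  params: {},
}) + '\n');
```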
HTTP Proxy Servers:
- External MCP server endpoints
- Reverse proxy with load balancing
- Health monitoring and failover
- Request/response caching
Process Lifecycle
Spawn Process:
- Receive spawn command from Backend
- Validate team permissions and resource limits
- Create isolated process environment
- Start MCP server with stdio communication
- Report process status to Backend
Monitor Process:
- Continuous health checking
- Resource usage monitoring
- Automatic restart on failure
- Performance metrics collection
Terminate Process:
- Graceful shutdown with SIGTERM
- Force kill with SIGKILL after timeout
- Resource cleanup and deallocation
- Final status report to Backend
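A sketch of the graceful-then-forced termination step; the 10-second grace period is an assumption, not a documented value:

```typescript
import type { ChildProcess } from 'node:child_process';

// Graceful shutdown: SIGTERM first, SIGKILL after the grace period.
async function terminateProcess(child: ChildProcess, graceMs = 10_000): Promise<void> {
  child.kill('SIGTERM'); // ask the MCP server to shut down cleanly
  await new Promise<void>((resolve) => {
    const timer = setTimeout(() => {
      child.kill('SIGKILL'); // force kill after timeout
      resolve();
    }, graceMs);
    child.once('exit', () => {
      clearTimeout(timer);
      resolve();
    });
  });
  // Resource cleanup and the final status report to the Backend follow here.
}
```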
Internal Architecture
Five Core Components
1. HTTP Proxy Router
- Team-aware request routing
- OAuth 2.1 Resource Server integration
- Load balancing across MCP server instances
- Request/response logging for audit
2. MCP Server Manager
- Process lifecycle management
- stdio JSON-RPC communication
- Health monitoring and restart logic
- Resource limit enforcement
3. Team Resource Manager
- Linux namespaces and cgroups setup
- Team-specific user and directory creation
- Resource quota enforcement
- Credential injection and isolation
4. Backend Communicator
- HTTP polling with adaptive intervals
- Command queue processing
- Status and metrics reporting
- Configuration synchronization
5. Communication Manager
- stdio JSON-RPC protocol handling
- HTTP proxy request routing
- Session management and cleanup
- Error handling and recovery
Technology Stack
Core Technologies
HTTP Framework:
- Fastify with `@fastify/http-proxy` for reverse proxy
- JSON Schema validation for all requests
- Pino structured logging
- TypeScript with full type safety
Process Management:
- Node.js `child_process` for MCP server spawning
- stdio JSON-RPC communication
- Process monitoring with health checks
- Graceful shutdown handling
Security:
- OAuth 2.1 Resource Server for authentication
- Linux namespaces for process isolation
- cgroups v2 for resource limits
- Secure credential management
Development Setup
Local Development
```bash
# Clone and setup
git clone https://github.com/deploystackio/deploystack.git
cd deploystack/services/satellite
npm install

# Configure environment
cp .env.example .env
# Edit DEPLOYSTACK_BACKEND_URL and add DEPLOYSTACK_REGISTRATION_TOKEN
# Obtain a registration token from the backend admin interface first

# Start development server
npm run dev
# Server runs on http://localhost:3001
```

Environment Configuration
```bash
# Required environment variables
DEPLOYSTACK_BACKEND_URL=http://localhost:3000
DEPLOYSTACK_SATELLITE_NAME=dev-satellite-001
DEPLOYSTACK_REGISTRATION_TOKEN=deploystack_satellite_global_eyJhbGc...
LOG_LEVEL=debug
PORT=3001

# Optional configuration
NODE_ENV=development
```

Note: `DEPLOYSTACK_REGISTRATION_TOKEN` is only required for initial satellite pairing. Once registered, satellites use their permanent API keys for all communication.
Testing Backend Communication
```bash
# Test current connection
curl http://localhost:3001/api/status/backend
```

Expected response:

```json
{
  "backend_url": "http://localhost:3000",
  "connection_status": "connected",
  "response_time_ms": 45,
  "last_check": "2025-01-05T10:30:00Z"
}
```

Database Integration
The Backend maintains satellite state in five tables:
- `satellites` - Satellite registry and configuration
- `satelliteCommands` - Command queue management
- `satelliteProcesses` - Process status tracking
- `satelliteUsageLogs` - Usage analytics and audit
- `satelliteHeartbeats` - Health monitoring data
See `services/backend/src/db/schema.sqlite.ts` for complete schema definitions.
Security Implementation
Authentication Flow
Registration Phase:
- Admin generates JWT registration token via backend API
- Satellite includes token in Authorization header during registration
- Backend validates token signature, scope, and expiration
- Backend consumes single-use token and issues permanent API key
- Satellite stores API key securely for ongoing communication
For detailed token validation process, see Registration Security.
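A heavily hedged sketch of the registration exchange; the endpoint path, payload, and response fields below are assumptions, so defer to the Satellite Registration docs for the real contract:

```typescript
// Hypothetical registration call: trade the single-use JWT for an API key.
async function register(backendUrl: string, registrationToken: string): Promise<string> {
  const res = await fetch(`${backendUrl}/api/satellites/register`, { // placeholder route
    method: 'POST',
    headers: {
      'Authorization': `Bearer ${registrationToken}`, // single-use JWT
      'Content-Type': 'application/json',
    },
    body: JSON.stringify({ name: process.env.DEPLOYSTACK_SATELLITE_NAME }),
  });
  if (!res.ok) throw new Error(`Registration failed: ${res.status}`);
  const { api_key } = await res.json(); // permanent API key for ongoing use
  return api_key as string;
}
```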
Operational Phase:
- All requests include `Authorization: Bearer {api_key}`
- Backend validates API key and satellite scope
- Team context extracted from satellite registration
- Commands filtered based on team permissions
Team Isolation Security
Process Security:
- Each team gets dedicated system user
- Process trees isolated with Linux namespaces
- File system permissions prevent cross-team access
- Network isolation optional for enhanced security
Credential Management:
- Team credentials injected into process environment
- No credential sharing between teams
- Secure credential storage and rotation
- Audit logging for all credential access
Monitoring and Observability
Structured Logging
Log Context:
```typescript
server.log.info({
  satelliteId: 'satellite-01',
  teamId: 'team-123',
  operation: 'mcp_server_spawn',
  serverId: 'filesystem-server',
  duration: '2.3s'
}, 'MCP server spawned successfully');
```

Log Levels:
- `trace`: Detailed communication flows
- `debug`: Development debugging
- `info`: Normal operations
- `warn`: Resource limits, restarts
- `error`: Process failures, communication errors
- `fatal`: Satellite crashes
Metrics Collection
System Metrics:
- CPU, memory, disk usage per satellite
- Process count and resource utilization
- Network connectivity and latency
- Error rates and failure patterns
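A sketch of a system-metrics snapshot using Node's `os` module (field names are illustrative; disk usage needs a platform-specific call and is omitted here):

```typescript
import os from 'node:os';

// Collect a point-in-time snapshot of CPU and memory for reporting.
function collectSystemMetrics() {
  return {
    cpu_load_1m: os.loadavg()[0],       // 1-minute load average
    memory_total_bytes: os.totalmem(),
    memory_free_bytes: os.freemem(),
    uptime_seconds: os.uptime(),
    process_count: 0,                   // filled in from the MCP Server Manager
  };
}
```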
Business Metrics:
- MCP tool usage per team
- Process spawn/termination rates
- Resource efficiency metrics
- User activity patterns
Implementation Status
Current Status:
- ✅ Basic Backend connection testing
- ✅ Fail-fast startup logic
- ✅ Debug endpoint for troubleshooting
- ✅ Environment configuration
- ✅ Satellite registration with upsert logic
- ✅ API key generation and management
- ✅ Bearer token authentication for requests
- ✅ Command polling loop with adaptive intervals
- ✅ Backend command creation system
- 🚧 Satellite command processing (in progress)
- 🚧 Process management (planned)
- 🚧 Team isolation (planned)
Next Milestones:
- Complete satellite command processing implementation
- Build MCP server process management
- Implement team isolation and resource limits
- Add comprehensive monitoring and alerting
- End-to-end testing and performance validation
The satellite communication system is designed for enterprise deployment with complete team isolation, resource management, and audit logging while maintaining the developer experience that defines the DeployStack platform.