Spiking neural networks offer event-driven computation suited to time-critical networking tasks such as anomaly detection, local routing control, and congestion management at the edge. Classical units, including Hodgkin-Huxley, Izhikevich, and the Random Neural Network, map poorly to these needs. We introduce Network-Optimised Spiking (NOS), a compact two-variable unit whose state encodes normalised queue occupancy and a recovery resource. The model uses a saturating nonlinearity to enforce finite buffers, a service-rate leak, and graph-local inputs with delays and optional per-link gates. It supports two differentiable reset schemes for training and deployment. We give conditions for equilibrium existence and uniqueness, local stability tests from the Jacobian trace and determinant, and a network threshold that scales with the Perron eigenvalue of the coupling matrix. The analysis yields an operational rule g* ~ k* ρ(W) linking damping to offered load, shows how saturation enlarges the stable region, and explains finite-size smoothing of synchrony onsets. Stochastic arrivals follow a Poisson shot-noise model aligned with telemetry smoothing. Against queueing baselines, NOS matches the M/M/1 mean by calibration while truncating deep tails under bursty input. In closed loop it yields low-jitter control with short settling times. In zero-shot, label-free forecasting, NOS is calibrated per node from arrival statistics; its dynamics yield high AUROC/AUPRC, enabling timely detection of congestion onsets with few false positives. Under a train-calibrated residual protocol across chain, star, and scale-free topologies, NOS improves early-warning F1 and detection latency relative to MLP, RNN, GRU, and tGNN baselines. We provide guidance for data-driven initialisation, surrogate-gradient training with a homotopy on reset sharpness, and explicit stability checks with topology-aware bounds for resource-constrained deployments.
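To make the unit concrete, the following is a minimal numerical sketch of a two-variable update of the kind the abstract describes: a saturating nonlinearity bounding occupancy, a service-rate leak, delayed graph-local inputs, and a sigmoid soft reset whose sharpness can be annealed during training. The parameter names (g, tau_r, beta, theta) and the exact functional forms are illustrative assumptions, not the paper's reference implementation.

```python
import numpy as np

def nos_step(x, r, spikes_delayed, W, dt=1e-3,
             g=0.8,        # service-rate leak (damping); assumed name
             tau_r=50e-3,  # recovery time constant; assumed name
             beta=20.0,    # soft-reset sharpness (homotopy parameter)
             theta=1.0):   # firing threshold on normalised occupancy
    """One Euler step of a hypothetical two-variable NOS unit.

    x : normalised queue occupancy per node (kept in [0, 1]).
    r : recovery resource per node.
    spikes_delayed : graph-local inputs, already shifted by link delays.
    """
    drive = W @ spikes_delayed                  # graph-local coupling
    sat = drive / (1.0 + np.abs(drive))         # saturating nonlinearity: finite buffers
    dx = -g * x + r * sat                       # leak vs. resource-gated arrivals
    dr = (1.0 - r) / tau_r                      # recovery resource replenishes
    # differentiable soft reset: sigmoid gate instead of a hard threshold
    fire = 1.0 / (1.0 + np.exp(-beta * (x - theta)))
    x = x + dt * dx - fire * x                  # firing drains the queue state
    r = r + dt * dr - 0.5 * fire * r            # firing consumes recovery resource
    return np.clip(x, 0.0, 1.0), np.clip(r, 0.0, 1.0), fire
```

Because the reset gate is a sigmoid rather than a step, the update is differentiable end to end, which is what makes surrogate-gradient training with a homotopy on beta (sharpening the gate over training) straightforward.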
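The operational rule g* ~ k* ρ(W) can also be checked numerically: compute the Perron (spectral) radius of the nonnegative coupling matrix and compare the damping g against k·ρ(W). A sketch under assumed values follows; the local gain k is, in the paper, derived from the Jacobian at equilibrium, so the constant and the example topology below are placeholders.

```python
import numpy as np

def stability_margin(W, g, k):
    """Topology-aware check of the rule g* ~ k*·ρ(W).

    For a nonnegative coupling matrix W, ρ(W) is its Perron eigenvalue.
    A positive margin suggests damping dominates offered load, so the
    linearised network dynamics should be locally stable.
    """
    rho = np.max(np.abs(np.linalg.eigvals(W)))  # spectral (Perron) radius
    return g - k * rho

# Example: a 3-node chain topology with symmetric link weights (assumed values).
W = np.array([[0.0, 0.4, 0.0],
              [0.4, 0.0, 0.4],
              [0.0, 0.4, 0.0]])
print(stability_margin(W, g=0.8, k=1.2))  # > 0 indicates a stable operating point
```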