Pseudorandom number generators (PRNGs) are ubiquitous in stochastic simulations and machine learning (ML), where they drive sampling, parameter initialization, regularization, and data shuffling. Despite this widespread use, the impact of PRNG statistical quality on computational results remains underexplored. In this study, we investigate whether differences in PRNG quality, as measured by standard statistical test suites, can influence outcomes in representative stochastic applications. Seven PRNGs were evaluated, ranging from low-quality linear congruential generators (LCGs) with known statistical deficiencies to high-quality generators such as Mersenne Twister, PCG, and Philox. We applied these PRNGs to four distinct tasks: an epidemiological agent-based model (ABM), two independent from-scratch MNIST classification implementations (Python/NumPy and C++), and a reinforcement learning (RL) CartPole environment. Each experiment was repeated 30 times per generator using fixed seeds to ensure reproducibility, and outputs were compared using appropriate statistical analyses. Results show that very poor statistical quality, as in the "bad" LCG that fails 125 TestU01 Crush tests, produces significant deviations in ABM epidemic dynamics, reduces MNIST classification accuracy, and severely degrades RL performance. In contrast, mid- and good-quality LCGs, despite failing a limited number of Crush or BigCrush tests, performed comparably to top-tier PRNGs in most tasks; the RL experiment was the primary exception, where performance scaled with statistical quality. Our findings indicate that, once a generator meets a sufficient statistical robustness threshold, its family or design has negligible impact on outcomes for most workloads, allowing selection to be guided by performance and implementation considerations. However, the use of low-quality PRNGs in sensitive stochastic computations can introduce substantial and systematic errors.
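The experimental protocol summarized above, fixed seeds with 30 repetitions per generator followed by a statistical comparison of outputs, can be illustrated with a minimal sketch. This is not the paper's code: the Monte Carlo pi estimate stands in as a hypothetical placeholder for the actual workloads, and only the three generators that ship with NumPy (PCG64, Philox, MT19937) are shown.

import numpy as np

# Illustrative sketch only: repeat a stochastic task N_REPEATS times per
# generator, seeding each run explicitly so every result is reproducible.
# run_task is a hypothetical stand-in for the real workloads (ABM, MNIST, RL).
GENERATORS = {
    "PCG": np.random.PCG64,
    "Philox": np.random.Philox,
    "Mersenne Twister": np.random.MT19937,
}
N_REPEATS = 30

def run_task(rng: np.random.Generator) -> float:
    """Placeholder stochastic task: Monte Carlo estimate of pi."""
    pts = rng.random((100_000, 2))
    return 4.0 * np.mean(np.sum(pts**2, axis=1) <= 1.0)

results = {}
for name, bitgen in GENERATORS.items():
    # One fixed seed per repetition; identical seeds across generators
    # keep the comparison between PRNG families fair.
    outputs = [run_task(np.random.Generator(bitgen(seed)))
               for seed in range(N_REPEATS)]
    results[name] = (np.mean(outputs), np.std(outputs, ddof=1))

for name, (mean, sd) in results.items():
    print(f"{name:>16}: mean={mean:.5f}, sd={sd:.5f}")

In this sketch the per-generator means and standard deviations would then feed into whatever significance test is appropriate for the task, mirroring the comparison described in the abstract.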