The 2025 Nobel Prize in Chemistry for Metal-Organic Frameworks (MOFs) and recent breakthroughs by Huanting Wang's team at Monash University establish angstrom-scale channels as a promising post-silicon substrate with native integrate-and-fire (IF) dynamics. However, harnessing these stochastic, analog materials for deterministic, bit-exact AI workloads (e.g., FP8 arithmetic) remains paradoxical: existing neuromorphic approaches typically settle for approximate computation and fall short of the precision Transformers require. To bridge the gap "from stochastic ions to deterministic floats," we propose a Native Spiking Microarchitecture that treats noisy neurons as logic primitives and introduces a Spatial Combinational Pipeline together with a Sticky-Extra Correction mechanism. Exhaustive validation over all 16,129 FP8 operand pairs confirms 100% bit-exact agreement with PyTorch. Crucially, the architecture reduces Linear-layer latency to O(log N), yielding a 17x speedup. Physical simulations further demonstrate robustness under extreme membrane leakage (β ≈ 0.01), effectively immunizing the system against the stochastic nature of the hardware.
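To make the bit-exactness claim concrete, the sketch below shows one way such an exhaustive FP8 pair check against PyTorch could be set up. It assumes the E4M3 format, addition as the checked operation, and the 127 finite non-negative encodings as the operand set (127² = 16,129 pairs); `spiking_fp8_add` is a hypothetical placeholder for the microarchitecture under test, not the paper's implementation.

```python
# Minimal sketch of an exhaustive FP8 (E4M3) pair check against PyTorch.
# Assumptions (not from the paper): the operation is addition and the operand
# set is the 127 finite non-negative E4M3 encodings, giving 127^2 = 16,129 pairs.
import itertools
import torch

def pytorch_fp8_add(a_bits: int, b_bits: int) -> int:
    """Golden reference: decode E4M3 bit patterns, add in float32, round back to E4M3."""
    a = torch.tensor([a_bits], dtype=torch.uint8).view(torch.float8_e4m3fn)
    b = torch.tensor([b_bits], dtype=torch.uint8).view(torch.float8_e4m3fn)
    # Rounding/overflow behavior is whatever PyTorch's cast to float8_e4m3fn applies.
    s = (a.to(torch.float32) + b.to(torch.float32)).to(torch.float8_e4m3fn)
    return int(s.view(torch.uint8).item())

def spiking_fp8_add(a_bits: int, b_bits: int) -> int:
    """Hypothetical stand-in for the spiking FP8 adder under test.
    It calls the reference here so the harness runs end to end; in practice
    it would drive the IF-neuron emulator instead."""
    return pytorch_fp8_add(a_bits, b_bits)

# 127 finite non-negative E4M3 encodings (0x7F is the NaN pattern) -> 16,129 pairs.
operands = [i for i in range(0x80) if i != 0x7F]

mismatches = sum(
    spiking_fp8_add(a, b) != pytorch_fp8_add(a, b)
    for a, b in itertools.product(operands, repeat=2)
)
print(f"checked {len(operands) ** 2} pairs, mismatches: {mismatches}")
```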
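The O(log N) latency figure is consistent with replacing sequential accumulation by a balanced adder tree. The toy sketch below only illustrates that depth argument (⌈log2 N⌉ reduction levels for N partial products); the tree structure is an assumption about what the Spatial Combinational Pipeline amounts to, not the paper's circuit.

```python
# Sketch of the latency argument: pairwise (tree) reduction needs ceil(log2 N)
# levels, versus N steps for a sequential accumulator.
import math

def tree_reduce(partials):
    """Reduce a list of partial products pairwise; return (sum, tree depth)."""
    depth = 0
    while len(partials) > 1:
        partials = [sum(partials[i:i + 2]) for i in range(0, len(partials), 2)]
        depth += 1
    return partials[0], depth

values = [1.0] * 1024                    # N = 1024 partial products for one output
total, depth = tree_reduce(values)
assert depth == math.ceil(math.log2(len(values)))   # 10 levels instead of 1024 steps
print(total, depth)
```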