This work establishes the exact performance limits of stochastic coded caching when users share a bounded number of cache states and the association between users and caches is random. Under the premise that more balanced user-to-cache associations perform better than unbalanced ones, our work provides a statistical analysis of the average performance of such networks, identifying in closed form the exact optimal average delivery time. To insightfully capture this delay, we derive easy-to-compute closed-form analytical bounds that prove tight in the limit of a large number $\Lambda$ of cache states. In the scenario where delivery involves $K$ users, we conclude that the multiplicative performance deterioration due to randomness, as compared to the well-known deterministic uniform case, can be unbounded and can scale as $\Theta\left( \frac{\log \Lambda}{\log \log \Lambda} \right)$ at $K=\Theta\left(\Lambda\right)$, and that this scaling vanishes when $K=\Omega\left(\Lambda\log \Lambda\right)$. To alleviate this adverse effect of cache-load imbalance, we consider various load-balancing methods, and show that under proximity-bounded load balancing, where each user can choose from $h$ neighboring caches, the aforementioned scaling reduces to $\Theta \left(\frac{\log(\Lambda / h)}{ \log \log(\Lambda / h)} \right)$, while when the proximity constraint is removed, the scaling is of a much slower order $\Theta \left( \log \log \Lambda \right)$. The above analysis is extensively validated numerically.
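As a rough numerical illustration of the three scaling regimes above, the sketch below evaluates the dominant terms $\frac{\log \Lambda}{\log \log \Lambda}$, $\frac{\log(\Lambda/h)}{\log \log(\Lambda/h)}$, and $\log \log \Lambda$ for a sample $\Lambda$ and $h$. This is not part of the paper's analysis: the $\Theta(\cdot)$ notation hides constants, so only the relative ordering and growth rates are meaningful, and the chosen values of $\Lambda$ and $h$ are arbitrary.

```python
import math

def unbalanced_gap(lam: float) -> float:
    # Dominant term of the Theta(log L / log log L) deterioration
    # at K = Theta(L), with no load balancing.
    return math.log(lam) / math.log(math.log(lam))

def proximity_gap(lam: float, h: int) -> float:
    # Dominant term under proximity-bounded balancing with a
    # choice among h neighboring caches: Theta(log(L/h) / log log(L/h)).
    return math.log(lam / h) / math.log(math.log(lam / h))

def unconstrained_gap(lam: float) -> float:
    # Dominant term without the proximity constraint: Theta(log log L).
    return math.log(math.log(lam))

# Example values (arbitrary): Lambda = 10^6 cache states, h = 64 neighbors.
lam, h = 1e6, 64
print(f"no balancing:        {unbalanced_gap(lam):.3f}")
print(f"proximity (h={h}):   {proximity_gap(lam, h):.3f}")
print(f"unconstrained:       {unconstrained_gap(lam):.3f}")
```

For any fixed $h \geq 1$ and large $\Lambda$, the three values are ordered as unconstrained < proximity-bounded < no balancing, matching the hierarchy of scalings stated in the abstract.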