Summary: Computation caching is a novel strategy for improving the performance of computation offloading in wireless networks endowed with edge cloud or fog computing capabilities. It consists of preemptively storing, in caches located at the edge of the network, the results of computations that users offload to the edge cloud. The goal is to avoid redundant processing of the same tasks, thus streamlining the offloading process and improving the use of both the users' and the network's resources. In this paper, a novel computation caching policy is defined, investigated, and benchmarked against state-of-the-art solutions. The proposed policy is built on three characterizing parameters of offloadable computational tasks: popularity, input size, and output size. This work demonstrates that jointly accounting for all three parameters is crucial to the design of efficient policies. The proposed policy has low computational complexity and is numerically shown to achieve optimality for several performance indicators and to yield significantly better results than the other analyzed policies. This holds in both a single- and a multi-cell scenario, in which a serving small cell has access to its neighboring cells' caches via backhaul. The benefits of computation caching are highlighted and quantified through extensive numerical simulations in terms of reduction of uplink traffic, communication and computation costs, offloading delay, and computational resource outage.
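The abstract does not specify how the three task parameters are combined, but one plausible reading is a greedy cache-placement rule: rank tasks by the traffic and computation saved per unit of cache space their result occupies. The sketch below illustrates this idea under stated assumptions; the scoring formula `popularity * input_size / output_size`, the task fields, and the function name `select_cached_tasks` are illustrative choices of ours, not the paper's actual policy.

```python
def select_cached_tasks(tasks, cache_capacity):
    """Greedy sketch of a computation-caching policy (illustrative only).

    Each task is a dict with:
      - 'popularity':  request rate of the task (assumed known)
      - 'input_size':  bytes the user would upload to offload it
      - 'output_size': bytes its cached result occupies at the edge

    Assumed score: expected uplink traffic avoided per byte of cache
    used, i.e. popularity * input_size / output_size. Tasks are cached
    in decreasing score order until the cache is full.
    """
    ranked = sorted(
        tasks,
        key=lambda t: t['popularity'] * t['input_size'] / t['output_size'],
        reverse=True,
    )
    cached, used = [], 0
    for t in ranked:
        if used + t['output_size'] <= cache_capacity:
            cached.append(t)
            used += t['output_size']
    return cached
```

A knapsack-style greedy of this kind runs in O(n log n) for n candidate tasks, which is consistent with the abstract's claim of low computational complexity, though the actual policy in the paper may differ.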