Cache-Assisted Broadcast-Relay Wireless Networks: A Delivery-Time Cache-Memory Tradeoff

An emerging trend of next generation communication systems is to deploy caches at network edges to reduce file delivery latency. To investigate this aspect, we study the fundamental limits of a cache-aided broadcast-relay wireless network consisting of one central base station, $M$ cache-equipped transceivers and $K$ receivers from a latency-centric perspective...

Full description

Bibliographic Details
Main Authors: Jaber Kakar, Alaa Alameer Ahmad, Anas Chaaban, Aydin Sezgin, Arogyaswami Paulraj
Format: Article
Language: English
Published: IEEE, 2019-01-01
Series: IEEE Access
Subjects: Caching; interference alignment; degrees-of-freedom; latency; delivery time
Online Access: https://ieeexplore.ieee.org/document/8732315/
id doaj-b58ea064baf74584942d39d1628769c2
record_format Article
spelling doaj-b58ea064baf74584942d39d1628769c2
2021-03-29T23:02:10Z
eng
IEEE
IEEE Access, ISSN 2169-3536
2019-01-01, Vol. 7, pp. 76833-76858
DOI 10.1109/ACCESS.2019.2921243, IEEE document 8732315
Cache-Assisted Broadcast-Relay Wireless Networks: A Delivery-Time Cache-Memory Tradeoff
Jaber Kakar (https://orcid.org/0000-0003-1468-5250), Faculty of Electrical Engineering, Ruhr-University Bochum, Bochum, Germany
Alaa Alameer Ahmad (https://orcid.org/0000-0002-0764-5560), Faculty of Electrical Engineering, Ruhr-University Bochum, Bochum, Germany
Anas Chaaban (https://orcid.org/0000-0002-8713-5084), School of Engineering, University of British Columbia, Kelowna, Canada
Aydin Sezgin, Faculty of Electrical Engineering, Ruhr-University Bochum, Bochum, Germany
Arogyaswami Paulraj, Department of Electrical Engineering, Information Systems Laboratory, Stanford University, CA, USA
https://ieeexplore.ieee.org/document/8732315/
Caching; interference alignment; degrees-of-freedom; latency; delivery time
collection DOAJ
language English
format Article
sources DOAJ
author Jaber Kakar
Alaa Alameer Ahmad
Anas Chaaban
Aydin Sezgin
Arogyaswami Paulraj
title Cache-Assisted Broadcast-Relay Wireless Networks: A Delivery-Time Cache-Memory Tradeoff
publisher IEEE
series IEEE Access
issn 2169-3536
publishDate 2019-01-01
description An emerging trend of next generation communication systems is to deploy caches at network edges to reduce file delivery latency. To investigate this aspect, we study the fundamental limits of a cache-aided broadcast-relay wireless network consisting of one central base station, $M$ cache-equipped transceivers and $K$ receivers from a latency-centric perspective. We use the normalized delivery time (NDT) to capture the per-bit latency at high signal-to-noise ratio (SNR). The objective is to jointly design the cache placement and file delivery in order to minimize the NDT. To this end, we establish two converse results and two achievability schemes. The first converse result is restricted to one-shot delivery schemes, while the second excludes this restriction. Similarly, the first achievable scheme is a general NDT-optimal one-shot scheme that synergistically exploits both multicasting and distributed zero-forcing opportunities. With respect to the second converse result, this scheme performs well for various parameter settings, particularly at higher cache sizes. The second scheme, effective at lower cache sizes, designs beamformers to facilitate both subspace interference alignment and zero-forcing. Exploiting both schemes, we are able to characterize the optimal tradeoff between cache storage and latency in networks satisfying $K+M\leq 4$. The tradeoff illustrates that the NDT is the preferred choice to capture the latency of a system rather than the commonly used sum degrees of freedom (DoF). In fact, our optimal tradeoff refutes the popular belief that increasing cache sizes translates to increasing the achievable sum DoF. As such, we discuss cases where increasing cache sizes decreases both the delivery time and the achievable DoF.
topic Caching
interference alignment
degrees-of-freedom
latency
delivery time
url https://ieeexplore.ieee.org/document/8732315/
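Note on the normalized delivery time (NDT) referenced in the abstract: the definition sketched here is the one commonly used in the cache-aided wireless network literature (often attributed to Sengupta, Tandon, and Simeone); the article's exact formulation may differ in detail, so treat this as background rather than a restatement of the paper's model. With $\mu$ denoting the cache size per node normalized by the library size, $L$ the file size in bits, $P$ the SNR, and $T(\mu, P, L)$ the worst-case time to deliver all requested files, the NDT is $\delta(\mu) = \lim_{P\to\infty} \lim_{L\to\infty} \frac{T(\mu, P, L)}{L/\log P}$, i.e., the delivery time normalized by the time $L/\log P$ that a single interference-free link of high-SNR capacity $\log P$ bits per channel use would need to deliver one file. A smaller NDT therefore means lower per-bit latency.

Because the NDT reflects both how much data still has to be sent over the air (which caching reduces) and how fast it can be sent (the achievable DoF), the NDT and the sum DoF need not improve together; this is the distinction the abstract draws when it states that larger caches can reduce the delivery time even while the achievable sum DoF decreases.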