Summary: Network resources are scarce in wireless multimedia sensor networks (WMSNs). Compressing media data can reduce the dependence of the user’s Quality of Experience (QoE) on network resources. Existing video coding standards, such as H.264 and H.265, exploit only spatial and short-term temporal redundancy. However, video usually also contains redundancy over long time spans. Compressing this long-term redundancy without compromising the user experience, while delivering video adaptively, is therefore a challenge in WMSNs. In this paper, a semantic-aware super-resolution transmission system for adaptive video streaming (SASRT) in WMSNs is presented. In SASRT, deep learning algorithms are used to extract semantic information from video and to enhance video quality. On the multimedia sensor, semantic information and video data are encoded at different bit rates and uploaded to the user. Alternatively, semantic information can be identified on the user side, further reducing the amount of data that must be transferred, though at an increased computational cost for the user. On the user side, video quality is enhanced with super-resolution techniques. The major challenges faced by SASRT are where semantic information should be identified, how the bit rates of semantic and video information should be chosen, and how network resources should be allocated between video and semantic information. The resulting optimization problem is formulated as a complexity-constrained nonlinear NP-hard problem, and three adaptive strategies and a heuristic algorithm are proposed to solve it. Simulation results demonstrate that SASRT effectively compresses long-term video redundancy and enriches the user experience under limited network resources while simultaneously improving the utilization of those resources.
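To make the bit-rate selection step concrete, below is a minimal sketch of choosing rates for the two streams (video and semantic information) under a shared bandwidth budget. All names, rate ladders, and utility values are hypothetical assumptions for exposition; this exhaustive search over small rate ladders stands in for, and is not, the paper's heuristic algorithm.

```python
# Hypothetical illustration: pick one bit rate per stream (video + semantic)
# to maximize an assumed QoE utility under a bandwidth budget.
# Ladders and utilities are made-up numbers, not from the paper.

VIDEO_LADDER = [(300, 1.0), (800, 2.1), (1500, 2.9), (3000, 3.4)]   # (kbps, utility)
SEMANTIC_LADDER = [(50, 0.5), (120, 1.1), (250, 1.4)]               # (kbps, utility)

def choose_bitrates(bandwidth_kbps):
    """Return (total utility, video kbps, semantic kbps) maximizing utility
    subject to the bandwidth budget, by brute force over the small ladders."""
    best = None
    for v_rate, v_util in VIDEO_LADDER:
        for s_rate, s_util in SEMANTIC_LADDER:
            if v_rate + s_rate <= bandwidth_kbps:
                cand = (v_util + s_util, v_rate, s_rate)
                if best is None or cand > best:
                    best = cand
    return best  # None if even the cheapest pair exceeds the budget

print(choose_bitrates(1700))  # -> (4.0, 1500, 120) with these assumed ladders
```

In the full problem the search space is far larger and coupled with the placement decision (sensor-side vs. user-side semantic identification), which is why the paper resorts to adaptive strategies and a heuristic rather than enumeration.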