Improving Quality of Experience for HTTP Adaptive Video Streaming: From Legacy to 360° Videos
Taghavi Nasrabadi, Afshin
Video streaming is driving the growth in Internet traffic, and HTTP Adaptive Streaming (HAS) is the widely adopted delivery solution. When network conditions are highly variable, users experience low video quality, playback freezes, and frequent quality switches, all of which degrade Quality of Experience (QoE). Scalable Video Coding (SVC), with its layered encoding of video, has been shown to reduce rebuffering under dynamic network conditions; however, SVC introduces at least 10% bitrate overhead per layer and increases the number of HTTP requests needed to fetch the video. We propose a hybrid adaptation method, called LAAVS, that employs both SVC and regular (non-scalable) encodings to improve user QoE while mitigating SVC's bandwidth and HTTP signaling overheads: LAAVS streams high-quality video and uses enhancement layers only occasionally, to reduce rebuffering. Experimental results on real-world bandwidth traces show that LAAVS can outperform state-of-the-art HAS solutions.

Recently, 360° video has gained significant popularity, but streaming it is challenging due to its high-bandwidth and low-latency requirements. Adaptive 360° streaming solutions reduce bandwidth consumption by streaming high-quality video only inside the user's viewport. Because viewport prediction methods have limited accuracy, these solutions must keep the playback buffer shallow, which makes playback prone to stalls and reduces viewer QoE. We propose using SVC for 360° video streaming to improve QoE by reducing the probability of stalls; our method also reduces server-side storage requirements and can improve in-network caching performance. The efficiency of a 360° streaming solution depends on the accuracy of its viewport prediction method, and the accuracy of existing methods drops drastically for prediction horizons longer than one second.
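To make the layered 360° approach concrete, the sketch below shows one way a tiled SVC client could schedule requests so that a viewport misprediction never leaves the screen empty: every tile gets the cheap base layer, and only tiles predicted to be in the viewport get an enhancement layer. The tile grid, layer names, and function names are illustrative assumptions, not the thesis's actual protocol.

```python
# Hedged sketch of SVC-based tiled 360-degree streaming (names are assumed):
# base layer everywhere as stall insurance, enhancement only in the viewport.

def schedule_tile_requests(num_tiles, predicted_viewport_tiles):
    """Return the (tile, layer) requests for one video segment."""
    # Base layer for every tile: any head movement still has video to play.
    requests = [(t, "base") for t in range(num_tiles)]
    # Enhancement layers only where the viewer is predicted to look.
    requests += [(t, "enhancement") for t in sorted(predicted_viewport_tiles)]
    return requests

def playable(buffered_layers, visible_tiles):
    """Playback stalls only if some visible tile lacks even a base layer."""
    return all("base" in buffered_layers.get(t, set()) for t in visible_tiles)
```

Even when the prediction is wrong, `playable` stays true for the actually visible tiles, because the base layer was fetched everywhere; the cost of a misprediction is reduced quality rather than a stall.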
Designing a better prediction method requires studying viewer behavior. We propose a taxonomy of 360° videos that categorizes them by their moving objects and camera motion. Through an IRB-approved human-subjects study, we gathered and analyzed viewport traces for each category. We then designed a viewport prediction method based on the viewport history of previous viewers, which improves prediction accuracy at longer prediction horizons.
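One plausible form of prediction from the history of previous viewers is sketched below: find the prior viewers whose head orientation currently matches the present viewer's, and average where they looked next. The nearest-neighbor selection, the yaw-only representation, and all names are assumptions for illustration; the thesis's actual method may differ.

```python
# Illustrative history-based viewport prediction (yaw only, names assumed).
import math

def circular_mean(angles_deg):
    """Mean of angles in degrees, handling wrap-around at 360 degrees."""
    s = sum(math.sin(math.radians(a)) for a in angles_deg)
    c = sum(math.cos(math.radians(a)) for a in angles_deg)
    return math.degrees(math.atan2(s, c)) % 360

def predict_yaw(history_traces, t, current_yaw=None, k=3):
    """Predict yaw at time step t from previous viewers' yaw traces.

    If current_yaw is given, use only the k prior viewers whose yaw at
    t-1 was closest to it; otherwise average over all prior viewers.
    """
    if current_yaw is None:
        return circular_mean([trace[t] for trace in history_traces])

    def angular_dist(trace):
        d = abs(trace[t - 1] - current_yaw) % 360
        return min(d, 360 - d)

    nearest = sorted(history_traces, key=angular_dist)[:k]
    return circular_mean([trace[t] for trace in nearest])
```

Because the prediction is anchored to where earlier viewers actually looked in the same video segment, it can remain informative at horizons of several seconds, where purely motion-extrapolation methods degrade.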