With the popularity of video sharing applications and video conferencing systems, there has been growing interest in measuring and enhancing the quality of videos captured and transmitted by those applications. While assessing the quality of UGC videos is itself still an open question, it is even more challenging to enhance the perceptual quality of UGC videos with unknown characteristics. In this work, we study the potential to enhance the quality of UGC videos by applying sharpening effects. To this end, we construct a subjective dataset through large-scale online crowdsourcing. The dataset consists of 1200 sharpness-enhanced UGC videos processed from 200 UGC source videos. During the subjective test, each processed video is compared with its source to capture fine-grained quality differences. We propose a statistical model to precisely measure whether the quality is enhanced or degraded. Moreover, we benchmark state-of-the-art no-reference image and video quality metrics on the collected subjective data. We observe that most metrics do not correlate well with the subjective scores, which indicates the need to develop more reliable objective metrics for UGC videos.
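The abstract does not spell out the statistical model used to decide whether a processed clip is enhanced or degraded, so the following is only a minimal sketch of one plausible approach: since each processed video is voted on relative to its source, a one-sample significance test on the signed comparison votes can classify the pair. The function name, vote scale, and significance level are all assumptions for illustration, not the paper's actual method.

```python
# Hypothetical sketch: classify a processed clip as enhanced/degraded/unchanged
# from paired-comparison votes against its source. Assumes each subject rates
# the processed clip on a signed scale (e.g., -3 "much worse" ... 0 "same"
# ... +3 "much better"); this is NOT the paper's specified model.
import numpy as np
from scipy import stats

def judge_enhancement(votes, alpha=0.05):
    """Decide 'enhanced', 'degraded', or 'unchanged' via a one-sample t-test.

    votes : signed comparison scores, one per subject.
    alpha : significance level for the two-sided test against a mean of 0.
    """
    votes = np.asarray(votes, dtype=float)
    t_stat, p_value = stats.ttest_1samp(votes, popmean=0.0)
    if p_value >= alpha:
        return "unchanged"  # difference not statistically significant
    return "enhanced" if votes.mean() > 0 else "degraded"

# Example: 20 crowdsourced votes for one processed/source pair.
rng = np.random.default_rng(0)
votes = rng.integers(-1, 3, size=20)  # mildly positive synthetic sample
print(judge_enhancement(votes))
```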
Video quality assessment (VQA) technology has attracted a lot of attention in recent years due to the increasing demand for video streaming services. Existing VQA methods are designed to predict video quality in terms of the mean opinion score (MOS) calibrated by humans in subjective experiments. However, they cannot predict the satisfied user ratio (SUR) of an aggregated viewer group. Furthermore, they provide little guidance for video coding parameter selection, e.g., the quantization parameter (QP) of a set of consecutive frames, in practical video streaming services. To overcome these shortcomings, the just-noticeable-difference (JND) based VQA methodology has been proposed as an alternative. It has been observed experimentally that the JND location is a normally distributed random variable. In this work, we explain this distribution by proposing a user model that takes both subject variability and content variability into account. The model is built upon users' capability to discern the quality difference between video clips encoded with different QPs, and it analyzes video content characteristics to account for inter-content variability. The proposed user model is validated on the data collected in the VideoSet. It is demonstrated that the model is flexible enough to predict the SUR distribution of a specific user group.
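To make the SUR/JND connection above concrete: if the JND location (in QP) of a viewer group is modeled as a normal random variable, the satisfied user ratio at a given QP is the complementary CDF, i.e., the fraction of viewers whose JND lies beyond that QP. The sketch below illustrates only this generic relationship; the mean and standard deviation are made-up values, not statistics from the VideoSet, and the function names are ours.

```python
# Illustrative sketch: SUR under a Gaussian JND-location model.
# If JND ~ N(mu, sigma^2) in QP units, then SUR(qp) = P(JND > qp).
# mu and sigma are hypothetical values chosen for demonstration.
from scipy.stats import norm

def satisfied_user_ratio(qp, mu, sigma):
    """SUR(qp) = P(JND > qp), the survival function (1 - CDF) at qp."""
    return norm.sf(qp, loc=mu, scale=sigma)

# Example: find the largest QP that keeps at least 75% of viewers satisfied.
mu, sigma = 32.0, 4.0                        # hypothetical group statistics
qp_75 = norm.isf(0.75, loc=mu, scale=sigma)  # inverse survival function
print(f"SUR(30)        = {satisfied_user_ratio(30, mu, sigma):.3f}")
print(f"QP for 75% SUR = {qp_75:.1f}")
```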