Unsupervised domain adaptation (UDA) has been widely used to transfer knowledge from a labeled source domain to an unlabeled target domain, countering the difficulty of labeling data in a new domain. Conventional solutions, however, require access to both source and target domain data during training, and the privacy of the large-scale, well-labeled source data and of the trained model parameters can become a major concern in cross-center/cross-domain collaborations. To address this, we propose a practical UDA solution for segmentation that relies only on a black-box segmentation model trained in the source domain, rather than on the original source data or a white-box source model. Specifically, we adopt a knowledge distillation scheme with exponential mixup decay (EMD) to gradually learn target-specific representations. In addition, unsupervised entropy minimization is applied to regularize the confidence of the target-domain predictions. We evaluated our framework on the BraTS 2018 database, achieving performance on par with white-box source-model adaptation approaches.
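The following is a minimal sketch of how the EMD pseudo-labeling and entropy regularization described above could look in code. It is illustrative only: the function names, hyperparameters (lambda0, total_iters), and the toy prediction arrays are assumptions, not the authors' implementation.

```python
import numpy as np

def emd_weight(iteration, total_iters, lambda0=1.0):
    """Mixup weight for the black-box (source) prediction, decaying exponentially."""
    return lambda0 * np.exp(-iteration / total_iters)

def mixed_pseudo_label(p_blackbox, p_target, iteration, total_iters):
    """Blend frozen black-box predictions with the current target model's predictions;
    early in training the black-box output dominates, later the target model's own
    output gradually takes over."""
    lam = emd_weight(iteration, total_iters)
    return lam * p_blackbox + (1.0 - lam) * p_target

def entropy_loss(p, eps=1e-8):
    """Unsupervised entropy minimization over per-pixel class probabilities
    (shape: [num_classes, H, W])."""
    return -np.mean(np.sum(p * np.log(p + eps), axis=0))

# Toy usage: 4-class probabilities on an 8x8 patch.
rng = np.random.default_rng(0)
p_bb = rng.dirichlet(np.ones(4), size=(8, 8)).transpose(2, 0, 1)  # black-box output
p_tg = rng.dirichlet(np.ones(4), size=(8, 8)).transpose(2, 0, 1)  # target model output
pseudo = mixed_pseudo_label(p_bb, p_tg, iteration=100, total_iters=1000)
print(pseudo.shape, entropy_loss(p_tg))
```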
Compression artifact removal is essential for producing visually pleasing content after image and video compression. Recent compression artifact reduction networks (CARNs) assume that images of the same or similar quality are used for both training and testing and, accordingly, require the quality factor as a prior to accomplish the task successfully. However, when the model confronts a level of distortion different from that seen during training, this discrepancy substantially degrades performance on the target data. To solve this problem, we propose a novel training scheme for CARNs that takes advantage of domain adaptation (DA). Specifically, we treat images encoded with different quality factors as different domains and train a CARN with DA so that it performs robustly in a domain with a different level of distortion. Experimental results demonstrate that the proposed method achieves superior performance on DIV2K, BSD68, and Set12.
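The abstract does not specify which DA mechanism is used, so the sketch below is purely an assumption: one common realization of "quality factor = domain" adaptation is a DANN-style gradient-reversal branch on top of the restoration features. The module names and sizes are placeholders.

```python
import torch
from torch import nn

class GradReverse(torch.autograd.Function):
    """Identity in the forward pass, gradient negation in the backward pass."""
    @staticmethod
    def forward(ctx, x, alpha):
        ctx.alpha = alpha
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.alpha * grad_output, None

class DomainHead(nn.Module):
    """Predicts which quality-factor domain a feature map came from."""
    def __init__(self, channels, num_domains):
        super().__init__()
        self.classifier = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(channels, num_domains))

    def forward(self, feat, alpha=1.0):
        return self.classifier(GradReverse.apply(feat, alpha))

# Hypothetical training step: restoration loss on source-quality images plus a domain
# loss whose reversed gradient pushes the CARN features to be indistinguishable
# across quality factors, e.g.
# loss = F.l1_loss(restored, clean) + lam * F.cross_entropy(domain_head(feat), domain_labels)
```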
In this paper, a sample-adaptive prediction technique is proposed to improve intra-coding performance for screen content video coding. The sample-based prediction reduces spatial redundancy among neighboring samples. To this end, the proposed technique predicts each sample as a weighted linear combination of its neighboring samples and applies a robust estimation technique, namely ridge estimation, to derive the weights at the decoder side. Ridge estimation uses an L2-norm regularization term, so the solution is more robust to high-variance samples such as the sharp edges and high color contrasts exhibited in screen content videos. Experimental results demonstrate that the proposed technique provides an improved coding gain compared with the HEVC screen content coding reference software.
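A minimal numpy sketch of decoder-side ridge estimation for the prediction weights is given below. Here X holds, for each already-reconstructed sample in a template region, its causal neighboring reconstructed samples, and y holds the sample itself; the variable names, neighbor count, and template layout are illustrative assumptions rather than the paper's exact design.

```python
import numpy as np

def ridge_weights(X, y, lam=1.0):
    """Closed-form ridge solution w = (X^T X + lam*I)^(-1) X^T y.
    The L2 penalty keeps the weights stable around sharp edges and high-contrast
    screen content, where plain least squares would have high variance."""
    n_feat = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(n_feat), X.T @ y)

def predict_sample(neighbors, w):
    """Weighted linear combination of a sample's reconstructed neighbors."""
    return float(neighbors @ w)

# Toy usage: 3 causal neighbors (left, above, above-left) per sample.
rng = np.random.default_rng(0)
X = rng.integers(0, 256, size=(64, 3)).astype(float)       # template neighbors
y = X @ np.array([0.5, 0.4, 0.1]) + rng.normal(0, 2, 64)    # template samples
w = ridge_weights(X, y, lam=10.0)
print(w, predict_sample(np.array([120.0, 130.0, 125.0]), w))
```

Because both encoder and decoder can form X and y from already-reconstructed samples, the weights can be derived identically on both sides without being signaled in the bitstream.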
3D-AVC, being developed under the Joint Collaborative Team on 3D Video Coding (JCT-3V), significantly outperforms Multiview Video Coding plus Depth (MVC+D), which simultaneously encodes texture views and depth views with the multiview extension of H.264/AVC (MVC). However, when 3D-AVC is configured for multiview compatibility, in which texture views are decoded without depth information, its coding performance degrades significantly. The reason is that the advanced coding tools incorporated into 3D-AVC do not perform well without a disparity vector converted from the depth information. In this paper, we propose a disparity vector derivation method that uses only the information of the texture views. Motion information of neighboring blocks is used to determine a disparity vector for a macroblock, so that the derived disparity vector can be used efficiently by the coding tools in 3D-AVC. The proposed method significantly improves the coding gain of 3D-AVC in the multiview compatible mode, achieving on average about 20% BD-rate saving in the coded views and 26% BD-rate saving in the synthesized views.
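The sketch below illustrates the general idea of texture-only disparity-vector derivation from neighboring blocks in the spirit of the method described above; the actual 3D-AVC neighbor scan order and fallback rules are not reproduced here, and the data layout is an assumption.

```python
from dataclasses import dataclass
from typing import Sequence, Tuple

@dataclass
class NeighborBlock:
    motion_vector: Tuple[int, int]   # (mv_x, mv_y) in quarter-pel units
    ref_is_interview: bool           # True if the reference picture belongs to another view

def derive_disparity_vector(neighbors: Sequence[NeighborBlock],
                            default_dv: Tuple[int, int] = (0, 0)) -> Tuple[int, int]:
    """Return the first inter-view motion vector found among the (ordered)
    spatial/temporal neighbors; if no neighbor uses inter-view prediction,
    fall back to a default disparity vector."""
    for blk in neighbors:
        if blk.ref_is_interview:
            return blk.motion_vector
    return default_dv

# Toy usage: left neighbor is temporally predicted, above neighbor is inter-view coded.
neighbors = [NeighborBlock((3, -1), False), NeighborBlock((26, 0), True)]
print(derive_disparity_vector(neighbors))   # -> (26, 0)
```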