Optimal Bayesian Transfer Learning
Abstract
The theory of optimal Bayesian classification (OBC) assumes that the sample data come from the unknown true feature-label distribution, which is a standard assumption in classification theory. When data from the true feature-label distribution are limited, it is possible to use data from a related feature-label distribution. This is the basic idea behind transfer learning, in which data from a source domain are used to augment data from a target domain that may follow a different feature-label distribution (Pan and Yang, 2010; Weiss et al., 2016). The key issue is to quantify relatedness, meaning a rigorous mathematical framework is needed to characterize transferability. This can be achieved by extending the OBC framework so that transfer learning from the source to the target domain proceeds via a joint prior distribution on the model parameters of the feature-label distributions of the two domains (Karbalayghareh et al., 2018b). In this way, the posterior distribution of the target model parameters can be updated through the joint prior distribution in conjunction with the source and target data.
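To make the idea concrete, the following is a minimal sketch (not the construction of Karbalayghareh et al., which uses joint priors on Gaussian model parameters in a matrix-variate setting): here both domains are reduced to a single location parameter with a known-variance Gaussian likelihood, and a joint Gaussian prior with correlation ρ couples the source parameter θ_s and target parameter θ_t. The posterior over the target parameter is then updated by both data sets through the prior coupling; all variable names and the simplified model are illustrative assumptions.

```python
import numpy as np

def joint_posterior(mu0, Sigma0, stats_s, stats_t, sigma2):
    """Posterior over (theta_s, theta_t) under a joint Gaussian prior.

    Simplified illustrative model (not the OBTL model itself):
      observations in domain d ~ N(theta_d, sigma2), sigma2 known;
      prior: (theta_s, theta_t) ~ N(mu0, Sigma0), with off-diagonal
      entries of Sigma0 encoding relatedness between the domains.
    stats_* = (n, ybar): sample size and sample mean for each domain.
    """
    n_s, ybar_s = stats_s
    n_t, ybar_t = stats_t
    prior_prec = np.linalg.inv(Sigma0)
    # Each domain's data contribute precision only to its own parameter.
    data_prec = np.diag([n_s / sigma2, n_t / sigma2])
    post_cov = np.linalg.inv(prior_prec + data_prec)
    post_mean = post_cov @ (
        prior_prec @ mu0
        + np.array([n_s * ybar_s, n_t * ybar_t]) / sigma2
    )
    return post_mean, post_cov

# Abundant source data (mean 1.0), scarce target data (mean 0.0).
mu0 = np.zeros(2)
src, tgt, sigma2 = (100, 1.0), (2, 0.0), 1.0

# Related domains (rho = 0.9) vs. unrelated domains (rho = 0).
rel = np.array([[1.0, 0.9], [0.9, 1.0]])
ind = np.eye(2)
m_rel, C_rel = joint_posterior(mu0, rel, src, tgt, sigma2)
m_ind, C_ind = joint_posterior(mu0, ind, src, tgt, sigma2)
```

With ρ = 0 the source data leave the target posterior untouched; with ρ = 0.9 the source sample pulls the target posterior mean toward the source mean and tightens its variance, which is the mechanism by which a joint prior transfers information across domains.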