The dependence on big data is currently a bottleneck for deep learning, since data collection is expensive and in some cases infeasible. Achieving good learning performance when the sample size is insufficient has therefore attracted increasing attention. The practical value of small sample learning is self-evident, as this technique aims to learn concepts of new classes from only a few labeled samples. Data augmentation is the most intuitive approach to small sample learning, and recent works have demonstrated its feasibility by proposing various data synthesis models. However, data augmentation during model training has a significant drawback: it can easily lead to overfitting, since it relies on a biased distribution formed by only a few training examples. In this paper, we propose a method for generating high-quality pseudo-samples by computing a regularization factor that constrains the generator using statistical distribution information drawn from a large number of classes.
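The abstract does not spell out the form of the regularization factor, so the following is only a minimal illustrative sketch of the general idea: pseudo-samples for a novel class are penalized when their first- and second-order statistics drift far from statistics pooled over many base classes. The function names (`class_statistics`, `distribution_regularizer`) and the pooled-moment penalty are assumptions for illustration, not the paper's actual formulation.

```python
import numpy as np

def class_statistics(features_by_class):
    """Per-class mean and covariance estimated from many base classes.

    features_by_class: dict mapping a class label to an (n_i, d) array of feature vectors.
    """
    stats = {}
    for label, feats in features_by_class.items():
        feats = np.asarray(feats)
        stats[label] = (feats.mean(axis=0), np.cov(feats, rowvar=False))
    return stats

def distribution_regularizer(pseudo_samples, base_stats):
    """Hypothetical regularization factor: distance between the statistics of the
    generated pseudo-samples and the statistics pooled over the base classes."""
    pooled_mean = np.mean([m for m, _ in base_stats.values()], axis=0)
    pooled_cov = np.mean([c for _, c in base_stats.values()], axis=0)

    gen_mean = pseudo_samples.mean(axis=0)
    gen_cov = np.cov(pseudo_samples, rowvar=False)

    # Penalize deviation of the generated mean and covariance from the pooled base statistics.
    mean_term = np.linalg.norm(gen_mean - pooled_mean) ** 2
    cov_term = np.linalg.norm(gen_cov - pooled_cov, ord="fro") ** 2
    return mean_term + cov_term

# Toy usage: 10 base classes with 50 samples each, 20 generated pseudo-samples.
rng = np.random.default_rng(0)
base = {c: rng.normal(size=(50, 16)) for c in range(10)}
pseudo = rng.normal(size=(20, 16))
penalty = distribution_regularizer(pseudo, class_statistics(base))
```

In a training loop, such a penalty would be added to the generator's loss so that synthesized samples stay consistent with the distributional structure observed across many classes rather than with the few available examples alone; the actual constraint used in the paper may differ.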