Knowledge-enhanced pre-trained language models have recently improved context-aware representations by incorporating external knowledge as well as linguistic knowledge from grammatical and syntactic analysis. However, the mismatch between text embeddings and knowledge graph embeddings in the feature space cannot be resolved by fine-tuning or by knowledge augmentation at the input module. In this paper, we revisit and advance natural language understanding in Chinese and propose an improved Chinese knowledge enhancement model with a unified representation space. Specifically, knowledge embedded in knowledge graph triples is effectively injected into the model through a novel pre-training task and a knowledge-aware masking strategy. We conducted extensive experiments on seven Chinese natural language processing tasks to evaluate the proposed model. The experimental results show that our model understands external knowledge more deeply. We also demonstrate the effectiveness of the proposed method through ablation experiments.
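The abstract does not spell out the masking procedure, so the following is only a minimal illustrative sketch of what a knowledge-aware masking strategy of this kind typically looks like: entity spans linked to knowledge graph triples are masked as whole units before ordinary random token masking fills the remaining budget. The function name, parameters, and the (start, end) span format are assumptions for illustration, not the authors' implementation.

```python
import random

def knowledge_aware_mask(tokens, entity_spans, mask_token="[MASK]",
                         mask_prob=0.15, seed=None):
    """Sketch of knowledge-aware masking (hypothetical, not the paper's code).

    tokens       : list of token strings
    entity_spans : list of (start, end) index pairs covering KG-linked entities
    """
    rng = random.Random(seed)
    masked = list(tokens)
    budget = max(1, int(len(tokens) * mask_prob))

    # Mask whole entity spans first, so the model must recover entity-level
    # knowledge rather than isolated subword pieces.
    for start, end in rng.sample(entity_spans, len(entity_spans)):
        span_len = end - start
        if 0 < span_len <= budget:
            for i in range(start, end):
                masked[i] = mask_token
            budget -= span_len

    # Spend any remaining budget on ordinary random token masking.
    candidates = [i for i, t in enumerate(masked) if t != mask_token]
    for i in rng.sample(candidates, min(budget, len(candidates))):
        masked[i] = mask_token
    return masked
```

For example, given the tokens of "姚明 出生 于 上海" with the entity span (0, 1) for the KG entity 姚明, the whole entity is masked as one unit before any random tokens are chosen.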