Cross-modal retrieval has been widely applied in the vision-language field with considerable success, but research on trajectory-text retrieval remains scarce. Moreover, current popular cross-modal retrieval models not only lack fine-grained semantic alignment between modalities but also ignore the influence of the text's grammatical structure on retrieval performance. To address these problems, this paper proposes a dual-stream trajectory-text retrieval model combined with a graph neural network, which unites two cross-modal interaction mechanisms: (1) local alignment, in which trajectory points and words are encoded separately after passing through a masking module to achieve fine-grained semantic alignment; and (2) global alignment, which introduces momentum contrastive learning to drive trajectory-text retrieval learning. Experimental results show that this hierarchical matching approach retains the efficiency of the dual-stream architecture while achieving higher accuracy than other cross-modal retrieval models, improving R@1 on the dataset by 3.2%-4.7%.
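The global-alignment stage described above relies on contrastive learning between trajectory and text embeddings. The abstract does not give the loss formulation, so the following is a minimal sketch of a symmetric InfoNCE-style contrastive loss over a batch of paired embeddings; the function name, the temperature value, and the use of NumPy are illustrative assumptions (the paper's momentum-encoder machinery, in the style of MoCo, is omitted here):

```python
import numpy as np

def contrastive_alignment_loss(traj_emb, text_emb, temperature=0.07):
    """Hypothetical symmetric InfoNCE loss for trajectory-text alignment.

    traj_emb, text_emb: (B, D) arrays where row i of each is a matched pair.
    Returns the mean of the trajectory->text and text->trajectory losses.
    """
    # L2-normalize so the dot product is cosine similarity
    traj = traj_emb / np.linalg.norm(traj_emb, axis=1, keepdims=True)
    text = text_emb / np.linalg.norm(text_emb, axis=1, keepdims=True)
    logits = traj @ text.T / temperature  # (B, B); matched pairs on the diagonal
    idx = np.arange(len(traj))

    # trajectory -> text direction: softmax over each row
    log_prob_t2x = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    t2x = -log_prob_t2x[idx, idx].mean()

    # text -> trajectory direction: softmax over each column (rows of logits.T)
    log_prob_x2t = logits.T - np.log(np.exp(logits.T).sum(axis=1, keepdims=True))
    x2t = -log_prob_x2t[idx, idx].mean()
    return (t2x + x2t) / 2
```

Correctly matched pairs (high diagonal similarity) yield a low loss, while mismatched pairs are penalized, which is the property the global-alignment stage exploits.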
Event extraction is a key research direction in information extraction. To improve extraction performance and address the inability of general event extraction methods to fully exploit textual feature information, an event extraction method that integrates trigger-word features is proposed. First, a remote trigger lexicon is built to provide additional feature information for the event type classification model and to strengthen its ability to discover event trigger words. Then, the event argument extraction model incorporates event-type and trigger-distance features to improve its representation learning. Finally, the event type classification model and the event argument extraction model are connected in series to complete event extraction. Experiments on the DuEE dataset show that the proposed model outperforms other models.
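The pipeline above feeds two kinds of hand-crafted features into its models: lexicon-match flags for trigger discovery and trigger-distance values for argument extraction. The abstract does not specify their exact form, so the sketch below shows one plausible encoding; the function names and the example sentence are illustrative assumptions, not the paper's implementation:

```python
def trigger_lexicon_flags(tokens, trigger_lexicon):
    """Mark each token that appears in the (remote) trigger lexicon.

    These binary flags can be concatenated to token embeddings to help
    the event type classifier spot candidate trigger words.
    """
    return [1 if tok in trigger_lexicon else 0 for tok in tokens]

def trigger_distance_features(tokens, trigger_index):
    """Signed distance of every token from the trigger token.

    A common feature for argument extraction: arguments tend to sit
    close to the trigger, and the sign encodes left/right position.
    """
    return [i - trigger_index for i in range(len(tokens))]

# Illustrative usage on a toy sentence (lexicon contents are hypothetical)
tokens = ["Acme", "announced", "the", "acquisition", "of", "Beta"]
lexicon = {"announced", "acquisition"}
flags = trigger_lexicon_flags(tokens, lexicon)        # [0, 1, 0, 1, 0, 0]
dists = trigger_distance_features(tokens, 1)          # [-1, 0, 1, 2, 3, 4]
```

In the two-stage setup, the flags would augment the input of the event type classifier, while the distances (relative to the trigger it predicts) would augment the input of the argument extractor run in series after it.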