Within many organizations, a vast number of communications, memos, reports, and documents accumulate on internal servers. Efficiently discovering relevant entries can reduce the time spent addressing organizational needs such as personnel skills matching or anomaly resolution. However, information retrieval over these disparate data types is challenging: systems must be designed for each organization's domain while accounting for unstructured and inconsistent datasets. Traditional querying via search terms often requires relevancy tuning by subject matter experts, which makes retrieval systems difficult to build. We argue that the development of retrieval systems can be simplified and enhanced by embedding data with Large Language Models (LLMs), organizing the information in a Knowledge Graph (KG) structure, and further encoding its relational features through a Graph Neural Network (GNN). One of the major challenges of using GNNs for information retrieval is optimizing negative edge selection: training GNNs requires a balanced ratio of positive to negative edges, yet the space of negative edges is exponentially larger than that of positive edges. In this work, we extend the LLM-GNN hybrid architecture by applying ensemble voting to a set of trained LLM-GNNs. Preliminary results have shown modest improvement on our personnel-document matching tasks. This work contributes to a development effort that aims to help engineers and scientists find new research opportunities, learn from past mistakes, and quickly address future needs.
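The abstract names two techniques without implementation detail: balancing negative edges against the positive edge set, and soft voting across an ensemble of trained models. The sketch below (plain PyTorch) is a minimal illustration of both ideas; the model interface and scoring functions are hypothetical assumptions, not the authors' architecture.

```python
import torch

def sample_negative_edges(edge_index, num_nodes, num_samples):
    """Uniformly sample node pairs that are not edges in the graph.

    Because non-edges vastly outnumber edges, rejection sampling
    rarely retries. Returns a (2, num_samples) tensor of negative
    edges, typically with num_samples equal to the positive count.
    """
    existing = set(map(tuple, edge_index.t().tolist()))
    negatives = []
    while len(negatives) < num_samples:
        u = torch.randint(num_nodes, (1,)).item()
        v = torch.randint(num_nodes, (1,)).item()
        if u != v and (u, v) not in existing:
            negatives.append((u, v))
    return torch.tensor(negatives).t()

def ensemble_link_scores(models, src_emb, dst_emb):
    """Soft-vote link scores over an ensemble of trained GNNs.

    Assumes each model maps a (source, destination) embedding pair
    to a logit; averaging the sigmoids is one common voting scheme.
    """
    scores = [torch.sigmoid(m(src_emb, dst_emb)) for m in models]
    return torch.stack(scores).mean(dim=0)
```

In a setup like this, each ensemble member would be trained on its own draw of negative edges, so the vote averages over the negative-sampling variance that a single model would be exposed to.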
One of the major challenges in deep learning is obtaining sufficiently large labeled training datasets, which can be expensive and time-consuming to collect. A unique approach to training segmentation models is to use Deep Neural Network (DNN) models with a minimal amount of initial labeled training samples. The procedure involves creating synthetic data and using image registration to calculate affine transformations to apply to the synthetic data. The method takes a small dataset and generates a high-quality, augmented-reality synthetic dataset with strong variance while maintaining consistency with real cases. Results illustrate segmentation improvements on various target features and increased average target confidence.
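The registration step described above can be sketched with standard OpenCV calls: match features between a real frame and a synthetic rendering, estimate a robust affine transform, and warp the synthetic image and its label mask into the real frame's geometry. The file names and the ORB/brute-force-matcher choices below are illustrative assumptions, not the paper's specific method.

```python
import cv2
import numpy as np

# Illustrative inputs: a real reference frame, a synthetic rendering,
# and the synthetic rendering's ground-truth segmentation mask.
real = cv2.imread("real_frame.png", cv2.IMREAD_GRAYSCALE)
synthetic = cv2.imread("synthetic_frame.png", cv2.IMREAD_GRAYSCALE)
mask = cv2.imread("synthetic_mask.png", cv2.IMREAD_GRAYSCALE)

# Detect and match local features between the two images.
orb = cv2.ORB_create(nfeatures=2000)
kp_r, des_r = orb.detectAndCompute(real, None)
kp_s, des_s = orb.detectAndCompute(synthetic, None)
matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(des_s, des_r)

# Estimate a robust affine transform from the matched keypoints.
src = np.float32([kp_s[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
dst = np.float32([kp_r[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
M, _ = cv2.estimateAffine2D(src, dst, method=cv2.RANSAC)

# Warp the synthetic image and its mask into the real frame's geometry,
# yielding an aligned (image, label) pair for the augmented dataset.
h, w = real.shape
aug_image = cv2.warpAffine(synthetic, M, (w, h))
aug_mask = cv2.warpAffine(mask, M, (w, h), flags=cv2.INTER_NEAREST)
```

Nearest-neighbor interpolation on the mask warp preserves discrete label values, which matters when the warped pair is fed back as segmentation ground truth.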
Image segmentation is one of the fundamental steps in computer vision. Separating targets from background clutter with high precision is a challenging operation for both humans and computers, and segmenting objects from IR images is currently done through tedious manual work. An implementation of a Deep Neural Network (DNN) that performs precision segmentation of multi-band IR video images is presented. A customized pix2pix DNN with multiple layers of generative encoder/decoder and discriminator architecture is used in the IR image segmentation process. Real and synthetic images and ground truths are employed to train the DNN. Iterative training is performed to achieve optimal segmentation accuracy from a minimal number of training samples. Special training images are created to enhance missing features and to increase the segmentation accuracy of the objects. Retraining strategies are developed to minimize DNN training time. Single-pixel accuracy has been achieved in IR target boundary segmentation using DNNs. The segmentation accuracy of the customized pix2pix DNN is compared with that of simple thresholding, GraphCut, a simple neural network, and ResNet models.
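The paper's customized architecture is not specified in the abstract; the sketch below shows only the generic pix2pix pattern it builds on, a convolutional encoder/decoder generator paired with a patch-style discriminator. Channel counts and depths are placeholders, not the authors' configuration.

```python
import torch.nn as nn

# Minimal pix2pix-style generator: strided-conv encoder, transposed-conv
# decoder, mapping a single-band IR image to a segmentation map. Layer
# sizes are placeholders; the customized DNN uses its own layout.
class Generator(nn.Module):
    def __init__(self, in_ch=1, out_ch=1):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(in_ch, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(64, 128, 4, stride=2, padding=1),
            nn.BatchNorm2d(128), nn.LeakyReLU(0.2),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1),
            nn.BatchNorm2d(64), nn.ReLU(),
            nn.ConvTranspose2d(64, out_ch, 4, stride=2, padding=1),
            nn.Tanh(),  # pix2pix outputs are scaled to [-1, 1]
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

# PatchGAN-style discriminator: scores (input image, segmentation)
# pairs concatenated along the channel dimension, hence in_ch=2.
class Discriminator(nn.Module):
    def __init__(self, in_ch=2):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(64, 1, 4, stride=1, padding=1),
        )

    def forward(self, x):
        return self.net(x)
```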