Artificial Intelligence (AI) and Machine Learning (ML) based systems have seen tremendous progress in the past years. This unprecedented growth has also opened new challenges and vulnerabilities for keeping AI/ML based systems safe and secure. With a multitude of studies investigating adversarial machine learning (AML) and cyber security for AI/ML systems, there is a need for novel techniques and methodologies for securing these systems. Cyber security is often used as a blanket term covering all defensive measures in the cyber domain, which leaves out methodologies and techniques that are used more offensively, such as cyber deception. This study provides a comprehensive overview of cyber deception for securing AI/ML systems, including its relevance, effectiveness, and potential for AI/ML assurance. The study surveys the behavioral science underpinning cyber deception, the benefits of using cyber deception, and the ethical concerns associated with it. Additionally, we present a use case for combining cyber deception with a zero-trust architecture (ZTA) to provide assurance and security for AI/ML based systems.
The automotive industry must implement some of the best cybersecurity practices to protect electronic systems, communication networks, control algorithms, software, users, and data from malicious attacks. An artificial intelligence (AI) and machine learning (ML) based automotive cybersecurity system could help identify potential vulnerabilities in electric vehicles, which are part of vehicular cyber-physical systems (CPS). The primary reason that cybersecurity challenges in the automotive domain differ significantly from those in other sectors is that threats to computer systems within vehicular CPS can cause direct physical harm to the drivers of the vehicles. There is a pressing need for a better understanding of various attacks and defensive approaches for vehicular CPS to better protect them against potential threats. This study investigates AI/ML attacks and defenses for vehicular CPS to build a better comprehension of how cybersecurity affects computational components within vehicular CPS, what the relevant standards are and how they differ, the prominent types of attacks against these systems, and finally the defensive approaches for these attacks. We provide a comprehensive overview of the attacks and defensive techniques/methodologies against vehicular cyber-physical systems.
The recent push for fair, trustworthy, and responsible Artificial Intelligence (AI) and Machine Learning (ML) systems has driven demand for more explainable systems that are capable of explaining their predictions/decisions and inner workings. This has led the field of Explainable AI (XAI) to grow exponentially in the past few years. XAI has been crucial in making AI/ML systems more comprehensible. However, XAI is limited to the model it is applied to, whether post-hoc or transparent. Even though XAI can explain the decisions being made by ML systems, these decisions are based on correlation and not causation. For applications such as tumor classification in the medical field, this can have serious consequences because people's lives are affected. A potential solution to this challenge is the application of causal learning, which goes beyond the limitations of correlation for ML systems. Causal learning can generate analysis based on cause-and-effect relations within the data. This study compares the explanations given by post-hoc XAI systems to the causal features derived from causal graphs via causal discovery for image datasets. We investigate how well XAI explanations/interpretations are able to identify the pertinent features within images. Causal graphs are generated for image datasets to extract the causal features that have a direct cause-and-effect relation with the label. These features are then compared to the features highlighted by XAI via feature relevance. The addition of causal learning for image datasets can aid in achieving fairness, bias detection, and mitigation to provide a robust and trustworthy system. We highlight the limitation of XAI tools such as LIME, whose explanations rest on simple pixel-based perturbations of images, whereas causal discovery can go beyond such perturbations to identify causal relations among image attributes.
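To illustrate the XAI side of this comparison, the following is a minimal sketch of extracting LIME feature-relevance masks for an image classifier. It assumes the open-source lime package together with scikit-learn and scikit-image; the digits dataset, the random-forest classifier, and the predict_fn wrapper are illustrative stand-ins for the image datasets and models studied here. The superpixels marked as most relevant are the features that would be compared against the causal features extracted from the causal graph.

import numpy as np
from sklearn.datasets import load_digits
from sklearn.ensemble import RandomForestClassifier
from skimage.segmentation import slic
from lime import lime_image

digits = load_digits()
X = digits.images / 16.0                                  # (n, 8, 8) grayscale in [0, 1]
y = digits.target

clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(X.reshape(len(X), -1), y)

def predict_fn(images):
    # LIME passes batches of perturbed RGB images; collapse to grayscale and flatten.
    gray = images.mean(axis=3)
    return clf.predict_proba(gray.reshape(len(gray), -1))

rgb = np.stack([X[0]] * 3, axis=-1)                       # LIME expects an RGB image
explainer = lime_image.LimeImageExplainer()
explanation = explainer.explain_instance(
    rgb, predict_fn, top_labels=1, num_samples=500,
    segmentation_fn=lambda img: slic(img, n_segments=16, compactness=1))
label = explanation.top_labels[0]
_, mask = explanation.get_image_and_mask(label, positive_only=True,
                                         num_features=5, hide_rest=False)
top_superpixels = [feat for feat, weight in explanation.local_exp[label][:5]]
print("Most relevant superpixels (ids):", top_superpixels)
print("Pixels covered by the top-5 relevant superpixels:", int(mask.sum()))

The perturbation-and-mask mechanism shown here is exactly the pixel-level reasoning discussed above: relevance is assigned to superpixels, not to higher-level image attributes, which is the gap the causal-discovery comparison targets.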
Artificial reasoning systems based on Artificial Intelligence (AI) and Machine Learning (ML) have made tremendous progress within the past decade. AI/ML systems have reached unprecedented levels of autonomy for a multitude of applications ranging from autonomous vehicles to biomedical imaging. This new level of intelligence and freedom requires AI/ML systems to have a degree of human-like intelligence in terms of causation beyond correlation. Combining causality with AI/ML systems, however, has remained a major challenge for investigators. As the literature highlights, AI/ML systems that are capable of generating cause-and-effect relationships are still in their infancy. The lack of investigations into causal reasoning systems that are capable of using datasets other than tabular data is well documented in the literature. Causal learning for image, audio, video, radio-frequency, and other modalities still remains a major challenge. While there are open-source tools available for causal learning with tabular data, there is a lack of tools for other modalities. To this end, this study proposes a causal learning method for image datasets using existing tools and methodologies. Specifically, we propose to use existing causal discovery toolboxes to investigate causal relations within image datasets by converting the images into tabular form through feature extraction with tools such as auto-encoders and deep neural networks. The converted dataset can then be used to generate causal graphs with tools such as the Causal Discovery Toolbox to highlight the specific cause-and-effect relations within the data. For AI/ML systems, using causal learning on image datasets via existing tools and methodologies can provide an extra layer of robustness to help ensure fairness and trustworthiness.
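A minimal sketch of this pipeline is given below: images are converted into tabular form via feature extraction, and the resulting table of features plus label is passed to a causal discovery algorithm. For brevity, PCA stands in for the auto-encoder/deep-neural-network feature extractor, scikit-learn's digits dataset stands in for the image datasets of interest, and the PC algorithm from the Causal Discovery Toolbox (cdt) is assumed to be available with its R backend (pcalg) installed.

import pandas as pd
from sklearn.datasets import load_digits
from sklearn.decomposition import PCA
from cdt.causality.graph import PC

digits = load_digits()
X = digits.images.reshape(len(digits.images), -1)     # flatten the 8x8 images

# Step 1: feature extraction (stand-in for an auto-encoder bottleneck or DNN embedding).
features = PCA(n_components=8, random_state=0).fit_transform(X)

# Step 2: tabular dataset = extracted features plus the class label.
df = pd.DataFrame(features, columns=[f"f{i}" for i in range(features.shape[1])])
df["label"] = digits.target

# Step 3: causal discovery over the tabular data (returns a networkx DiGraph).
graph = PC().predict(df)

# Features with a directed edge into the label are the candidate causal features.
causal_features = sorted(graph.predecessors("label"))
print("Direct causal parents of the label:", causal_features)

Swapping PCA for a trained auto-encoder bottleneck only changes step 1; the tabular interface to the causal discovery step stays the same, which is what lets existing tabular tools be reused for image data.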
KEYWORDS: Machine learning, Artificial intelligence, Systems modeling, Evolutionary algorithms, Algorithm development, Library classification systems, Data modeling, Random forests, Performance modeling, Education and training
Artificial intelligence (AI) and machine learning (ML) systems are required to be fair and trustworthy. They must be capable of bias detection and mitigation to achieve robustness. To this end, several research fields have seen growing efforts toward making AI/ML systems more trustworthy. Causal learning and Explainable AI (XAI) are two such fields that have been used extensively in the past few years to achieve explainability and fairness; however, they have typically been used as separate methodologies rather than together. This paper provides a new perspective on using causal learning and XAI together to create a more robust and trustworthy system. Having causality and explainability in the same model provides an extra layer of robustness that is not achieved by using either of them individually. We present a use case for combining causality via causal discovery and explainability via feature relevance. The causal graphs generated by causal discovery are compared to the feature relevance plots from the ML model. Directed causal graphs display the features that are causally relevant for the predictions, and these causally relevant features can be directly compared to the features listed in correlation-based explanations from XAI.
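The comparison step described above can be sketched as follows: the causal parents of the label, taken from the causal graph, are checked against the top-ranked features from a correlation-based relevance method. The sketch uses synthetic tabular data where the causal parents are known by construction and permutation feature importance as the relevance method; in the actual use case both sets would come from causal discovery and the XAI tool applied to the model under study.

import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)
n = 2000
df = pd.DataFrame({f"x{i}": rng.normal(size=n) for i in range(5)})
# Only x0 and x1 actually cause the label; x2..x4 are noise.
df["y"] = (df["x0"] + 0.5 * df["x1"] + 0.3 * rng.normal(size=n) > 0).astype(int)
causal_parents = {"x0", "x1"}                 # in practice: parents of y in the causal graph

X, y = df.drop(columns="y"), df["y"]
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

# Correlation-based XAI stand-in: rank features by permutation importance.
imp = permutation_importance(clf, X, y, n_repeats=10, random_state=0)
ranked = [X.columns[i] for i in np.argsort(imp.importances_mean)[::-1]]
top_k = set(ranked[: len(causal_parents)])

overlap = causal_parents & top_k
jaccard = len(overlap) / len(causal_parents | top_k)
print("XAI top features:", top_k)
print("Causally relevant features:", causal_parents)
print("Agreement (Jaccard):", jaccard)

High agreement between the two sets supports trusting the model's explanations; disagreement flags features that are merely correlated with the label and therefore candidates for bias.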
A major concern for artificial reasoning systems seeking robustness and trustworthiness is causal learning, where better explanations are needed to support the underlying tasks. Explaining observational datasets without ground truth presents a unique challenge. This paper aims to provide a new perspective on explainability and causality by combining the two. We propose a model that extracts quantitative knowledge from observational data via treatment effect estimation to create better explanations by comparing and validating the causal features against results from correlation-based feature relevance explanations. Average treatment effect (ATE) estimation provides a quantitative comparison of the causal features to the relevant features from explainable AI (XAI). This yields a comprehensive approach to generating robust and trustworthy explanations, validated by both causality and XAI, to ensure trustworthiness, fairness, and bias detection within the data as well as the AI/ML models of artificial reasoning systems.
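A minimal sketch of the ATE-estimation step follows, assuming the open-source DoWhy library and synthetic observational data in which the true effect of the treatment-like feature on the outcome is 2.0; the column names x, t, and y are illustrative. The estimated ATE is the quantitative, causal counterpart that can be set against the same feature's correlation-based relevance score from XAI.

import numpy as np
import pandas as pd
from dowhy import CausalModel

rng = np.random.default_rng(0)
n = 5000
x = rng.normal(size=n)                           # confounder
t = (x + rng.normal(size=n) > 0).astype(int)     # treatment-like feature depends on x
y = 2.0 * t + 1.5 * x + rng.normal(size=n)       # true ATE of t on y is 2.0
df = pd.DataFrame({"x": x, "t": t, "y": y})

# Identify and estimate the effect of t on y, adjusting for the confounder x.
model = CausalModel(data=df, treatment="t", outcome="y", common_causes=["x"])
estimand = model.identify_effect(proceed_when_unidentifiable=True)
ate = model.estimate_effect(estimand, method_name="backdoor.linear_regression")
print("Estimated ATE:", ate.value)               # expected to be close to 2.0

A feature with a large estimated ATE but a low XAI relevance rank (or vice versa) indicates a mismatch between the causal and correlational views that warrants further scrutiny of the data or the model.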
KEYWORDS: Data fusion, Data modeling, Sensors, Computer security, Image encryption, Defense and security, Systems modeling, Analytical research, Head, Fuzzy logic
Advancements in computer science, especially artificial intelligence (AI) and machine learning (ML), have brought about a scientific revolution in a plethora of military and commercial applications. One such area has been data science, where the sheer astronomical amount of available data has spurred sub-fields of research involving its storage, analysis, and use. One area of focus in recent years has been the fusion of data coming from multiple modalities, called multi-modal data fusion, and its use and analysis for practical, deployable applications. Because of the differences among the data types, ranging from infrared/radio-frequency to audio/visual, it is extremely difficult, if not outright impossible, to analyze them via one single method. The need to fuse multiple data types and sources properly and adequately for analysis therefore introduces an extra degree of freedom for data science. This paper provides a survey of multi-modal data fusion. We provide an in-depth review of multi-modal data fusion themes and describe the methods for designing and developing such data fusion techniques. We include an overview of the different methods and levels of data fusion. An overview of the security of data-fusion techniques is also provided, which highlights the present gaps within the field that need to be addressed.
The recent advances in machine learning (ML) and artificial intelligence (AI) have resulted in widespread application of data-driven learning algorithms. The rapid growth of AI/ML and their penetration into a plethora of civilian and military applications, while successful, has also opened new vulnerabilities. It is now clear that ML algorithms for AI systems are viable targets for malicious attacks. Therefore, there is a pressing need for a better understanding of adversarial attacks against ML models in order to secure them against such malicious attacks. In this paper, we present a survey of adversarial machine learning and associated countermeasures. We present taxonomies for both attacks and defenses, grouping ML/AI system attacks that share properties and characteristics so that they can be linked with appropriate defensive approaches, and we categorize attacks proposed in the literature according to this taxonomy.