Brain-computer interfaces (BCIs) measuring electrical activity via electroencephalogram (EEG) have evolved beyond clinical applications to become wireless consumer products. Typically marketed for meditation and neurotherapy, these devices are limited in scope and currently too obtrusive to serve as ubiquitous wearables. Building on recent advances in hearing aid technology, wearables have shrunk to the point that the necessary sensors, circuitry, and batteries can fit into a small in-ear device.
In this work, an ear-EEG device is created with a novel system for artifact removal and signal interpretation. The small, compact, cost-effective, and discreet device is evaluated against existing consumer electronics in this space for signal quality, comfort, and usability. A custom mobile application is developed to process raw EEG from each device and display interpreted data to the user. Artifact removal and signal classification are accomplished via a combination of support matrix machines (SMMs) and soft thresholding of relevant statistical properties.
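The abstract does not specify the exact thresholding rule, but the soft-thresholding step can be illustrated with the classic shrinkage operator applied to an EEG epoch. The signal values and the threshold `lam` below are hypothetical, for illustration only:

```python
import numpy as np

def soft_threshold(x, lam):
    """Classic soft-thresholding (shrinkage) operator:
    values within [-lam, lam] are zeroed; the rest shrink toward zero."""
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

# Hypothetical EEG epoch containing two large artifact spikes
epoch = np.array([0.2, -0.1, 5.0, 0.3, -4.0, 0.05])
cleaned = soft_threshold(epoch, lam=1.0)
# Small fluctuations are suppressed; large excursions are attenuated
```

In practice the threshold would be tied to a statistical property of the signal (e.g. a multiple of its standard deviation) rather than a fixed constant, in line with the "soft thresholding of relevant statistical properties" described above.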
As biometrics become increasingly pervasive, consumer electronics are reaping the benefits of improved authentication methods. Leveraging the physical characteristics of a user reduces the burden of setting and remembering complex passwords while enabling stronger security. Multi-factor systems lend further credence to this model, increasing security via multiple passive data points. In recent years, brainwaves have been shown to be another feasible source for biometric authentication. Physically unique to an individual in certain circumstances, the signals can also be changed by the user at will, making them more robust than static physical characteristics. No paradigm is impervious, however, and even well-established medical technologies have deficiencies. In this work, a system for biometric authentication via brainwaves is constructed with electroencephalography (EEG). The efficacy of EEG biometrics via existing consumer electronics is evaluated, and vulnerabilities of such a system are enumerated. Impersonation attacks are performed to expose the extent to which the system is vulnerable. Finally, a multimodal system combining EEG with additional factors is recommended and outlined.
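The recommended multimodal design is not spelled out in the abstract, but one common pattern it suggests is score-level fusion: each factor produces a match score, and a weighted combination drives the accept/reject decision. The factor names, weights, and threshold below are illustrative assumptions, not the system's actual parameters:

```python
import numpy as np

def fused_decision(scores, weights, threshold=0.7):
    """Score-level fusion for multi-factor authentication.
    scores: per-modality match scores in [0, 1] (e.g. EEG, fingerprint, PIN)
    weights: relative trust in each modality (normalized internally)
    Returns True if the weighted score clears the acceptance threshold."""
    scores = np.asarray(scores, dtype=float)
    weights = np.asarray(weights, dtype=float)
    weights = weights / weights.sum()  # normalize so weights sum to 1
    return bool(scores @ weights >= threshold)

# Hypothetical match scores for EEG, fingerprint, and PIN factors
accept = fused_decision([0.9, 0.8, 1.0], weights=[0.5, 0.3, 0.2])
```

A fusion scheme of this kind means a weak or spoofed EEG score alone cannot grant access, which is precisely the mitigation motivated by the impersonation attacks described above.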
In this work, we present a new model of visual saliency by combining results from existing methods, improving upon their performance and accuracy. By fusing pre-attentive and context-aware methods, we harness the strengths of state-of-the-art models while compensating for their deficiencies. We put this theory to the test in a series of experiments, comparatively evaluating the visual saliency maps and employing them for content-based image retrieval and thumbnail generation. We find that, on average, our model yields definitive improvements in recall and F-measure with comparable precision. In addition, we find that all image searches using our fused method return more correct images and rank them higher than searches using the original methods alone.
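The abstract does not state the fusion rule, but a minimal sketch of combining a pre-attentive and a context-aware saliency map is a normalized linear blend. The maps and the blend weight `alpha` here are hypothetical placeholders:

```python
import numpy as np

def fuse_saliency(pre_attentive, context_aware, alpha=0.5):
    """Blend two saliency maps: min-max normalize each to [0, 1],
    then combine linearly with weight alpha on the pre-attentive map."""
    def norm(m):
        m = m.astype(float)
        rng = m.max() - m.min()
        return (m - m.min()) / rng if rng > 0 else np.zeros_like(m)
    return alpha * norm(pre_attentive) + (1.0 - alpha) * norm(context_aware)

# Tiny 2x2 example maps standing in for real saliency outputs
pre = np.array([[0.0, 2.0], [4.0, 8.0]])
ctx = np.array([[1.0, 1.0], [3.0, 5.0]])
fused = fuse_saliency(pre, ctx)
```

Normalizing before blending keeps either method from dominating purely because of its output scale; a learned or per-image weight could replace the fixed `alpha`.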