Red tides are caused by the abnormal proliferation of marine plankton and lead to mass fish kills and significant damage to the fishing industry. Currently, the plankton species responsible for red tides are detected and quantified mainly by manual inspection under an optical microscope, which requires considerable time, effort, and expertise in species identification. This study explored object detection methods for classifying marine plankton species in microscopy images, with the aim of automating the detection of red tide phytoplankton.
In the past, paper was a valuable resource, so some ancient documents carry text on both the front and back of each sheet. Text written on the reverse side of the paper is referred to as "Shihai-monjo." In particular, when a document with a bag-bound structure is analyzed using images taken from above, a significant problem arises: text from the front and back sides overlaps, leaving the Shihai-monjo incomplete. In this study, we addressed this problem by applying a deep-learning-based image inpainting method to restore the missing parts of the Shihai-monjo.
In CNN-based classification of seafloor sediment images, accuracy can drop drastically when the model is applied to a different sea area. We therefore aim to improve accuracy by additionally using the environmental sound recorded while dragging over the seafloor. Sound can be classified with a CNN applied to log-mel spectrogram images, so image-based and sound-based classification can be expected to complement each other. As a concrete method, we propose a robust sediment classification method based on transfer learning.
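The complementary use of image- and sound-based classifiers can be realized with a simple late-fusion scheme such as the sketch below, which averages the per-class probabilities of the two CNNs. The function name, the equal weighting, and the class labels are illustrative assumptions; the abstract does not specify how the two modalities are actually combined.

```python
import numpy as np

def late_fusion(p_image, p_sound, w=0.5):
    """Combine class probabilities from an image-based CNN and a
    sound-based (log-mel) CNN by weighted averaging -- a simple
    late-fusion scheme, not necessarily the paper's method."""
    p_image = np.asarray(p_image, dtype=float)
    p_sound = np.asarray(p_sound, dtype=float)
    fused = w * p_image + (1.0 - w) * p_sound
    return fused / fused.sum()  # renormalize to a probability vector

# Example: the image model is unsure, but the sound model is confident.
p_img = [0.40, 0.35, 0.25]   # hypothetical classes: sand / mud / gravel
p_snd = [0.70, 0.20, 0.10]
fused = late_fusion(p_img, p_snd)
print(fused.argmax())  # 0 (the sound evidence resolves the ambiguity)
```

A weighted average is the simplest fusion rule; in practice the weight `w` could be tuned per sea area, which is where a complementary relationship between the modalities would show up.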
In this paper, we improve a GrabCut-based extraction method to achieve highly accurate stone contour extraction from stone wall images. Previous studies suffered from over-segmentation and under-segmentation because the region treated as having background characteristics could not be specified properly. As a countermeasure, to make the specification of the restriction range for background characteristics more suitable for stone materials, we improve extraction accuracy by using the convex hull of the region obtained by ordinary GrabCut. Good results were obtained by restricting the background characteristics with a contracted convex hull; however, some stones were still insufficiently segmented, so we investigated the cause and describe future countermeasures.
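The geometric core of this idea can be sketched as follows: compute the convex hull of the region returned by an ordinary GrabCut pass, then contract it toward its centroid to delimit where background characteristics are restricted. The contraction ratio and the way the contracted polygon is fed back into GrabCut (e.g. via OpenCV's mask mode) are assumptions for illustration; only the hull-and-contract step is shown.

```python
import numpy as np

def convex_hull(points):
    """Andrew's monotone chain convex hull; `points` is a list of (x, y)."""
    pts = sorted(set(map(tuple, points)))
    if len(pts) <= 2:
        return pts

    def cross(o, a, b):
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

    lower, upper = [], []
    for p in pts:
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]  # counter-clockwise hull vertices

def contract_hull(hull, ratio=0.8):
    """Shrink the hull vertices toward the centroid by `ratio` (assumed
    value); the contracted polygon can then delimit the region outside
    of which pixels are restricted to background in a second GrabCut pass."""
    h = np.asarray(hull, dtype=float)
    c = h.mean(axis=0)
    return c + ratio * (h - c)
```

Contracting the hull keeps the restriction region strictly inside the first-pass foreground, which is why it helps avoid marking true stone pixels as background.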
Three-dimensional computer-based analysis of cardiomyocytes is urgently needed to clarify the mechanisms of heart disease. However, because microscopic images contain cells other than cardiomyocytes, the cells must be classified before analysis. Cardiomyocytes are characterized by a relatively low volume fraction of the cell nucleus within the cytoplasm compared with other cell types. In this study, we exploited this feature: cell nuclei and cytoplasm were extracted from fluorescence microscopy images of neonatal mouse hearts, and cardiomyocytes were separated from other cells based on the nucleus-to-cytoplasm volume ratio. When evaluated against ground-truth data created from images of fluorescently labeled cardiomyocytes, the classification accuracy was approximately 90%. Based on these experimental results, the method is considered an effective approach for cardiomyocyte classification.
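The decision rule described above reduces to thresholding the nucleus-to-cytoplasm volume ratio of each segmented cell. The sketch below assumes boolean 3-D voxel masks per cell; the threshold value is an illustrative assumption, since the abstract does not report the one actually used.

```python
import numpy as np

def classify_by_volume_ratio(nucleus_mask, cytoplasm_mask, threshold=0.25):
    """Classify one segmented cell as a cardiomyocyte when its
    nucleus-to-cytoplasm volume ratio is below `threshold` (assumed
    value). Both masks are boolean 3-D arrays (z, y, x) for one cell."""
    v_nuc = int(nucleus_mask.sum())   # voxel count approximates volume
    v_cyt = int(cytoplasm_mask.sum())
    if v_cyt == 0:
        return "unknown"
    ratio = v_nuc / v_cyt
    return "cardiomyocyte" if ratio < threshold else "other"

# Toy example: a large cytoplasm containing a small nucleus.
cyt = np.ones((4, 10, 10), dtype=bool)                # 400 voxels
nuc = np.zeros_like(cyt)
nuc[1:3, 3:6, 3:6] = True                             # 18 voxels
print(classify_by_volume_ratio(nuc, cyt))  # cardiomyocyte (ratio = 0.045)
```

Counting voxels is the natural volume estimate once nuclei and cytoplasm have been segmented in 3-D; anisotropic voxel spacing would only rescale both volumes and so leaves the ratio unchanged.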