We present an automated method, based on mutual information, for the registration and mosaicking of multimodal technical images of artworks. We focus on the registration of element distribution maps resulting from macro X-ray fluorescence (MA-XRF) scanning, which can be considered as a layered stack and treated as the moving image. The target fixed image is the visible image of the same artwork. In consecutive stages, a single optimised transformation that provides the highest average mutual information across all images in the stack is identified by consensus. This transformation can then be applied to the moving image to obtain the best alignment between the moving and fixed images when overlapped.
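As a rough illustration of the core quantity involved, the following Python sketch (assuming numpy is available; the image names, the integer-translation-only search, and the toy data are illustrative, not the optimisation used in the paper) computes the mutual information between a fixed image and each map in a moving stack, and keeps the shift with the highest average score across the stack.

```python
import numpy as np

def mutual_information(a, b, bins=32):
    """Mutual information estimated from the joint histogram of two equally sized images."""
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0                                     # avoid log(0)
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))

def best_shift(fixed, moving_stack, max_shift=5):
    """Consensus search: the integer shift maximising the average MI over all maps in the stack."""
    best, best_score = (0, 0), -np.inf
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            shifted = [np.roll(np.roll(m, dy, axis=0), dx, axis=1) for m in moving_stack]
            score = np.mean([mutual_information(fixed, s) for s in shifted])
            if score > best_score:
                best, best_score = (dy, dx), score
    return best, best_score

# Toy example: the 'element maps' are noisy, shifted copies of the visible image.
rng = np.random.default_rng(0)
fixed = rng.random((64, 64))
stack = [np.roll(fixed, (2, -3), axis=(0, 1)) + 0.05 * rng.standard_normal((64, 64))
         for _ in range(3)]
print(best_shift(fixed, stack))   # expected shift close to (-2, 3)
```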
Significance: Light-field microscopy (LFM) enables fast, light-efficient, volumetric imaging of neuronal activity with calcium indicators. Calcium transients differ in temporal signal-to-noise ratio (tSNR) and spatial confinement when extracted from volumes reconstructed by different algorithms.
Aim: We evaluated the capabilities and limitations of two light-field reconstruction algorithms for calcium fluorescence imaging.
Approach: We acquired light-field image series from neurons either bulk-labeled or filled intracellularly with the red-emitting calcium dye CaSiR-1 in acute mouse brain slices. We compared the tSNR and spatial confinement of calcium signals extracted from volumes reconstructed with synthetic refocusing and Richardson–Lucy three-dimensional deconvolution with and without total variation regularization.
Results: Both synthetic refocusing and Richardson–Lucy deconvolution resolved calcium signals from single cells and neuronal dendrites in three dimensions. Increasing deconvolution iteration number improved spatial confinement but reduced tSNR compared with synthetic refocusing. Volumetric light-field imaging did not decrease calcium signal tSNR compared with interleaved, widefield image series acquired in matched planes.
Conclusions: LFM enables volumetric imaging of calcium transients at high volume rates in single-cell somata (bulk-labeled) and dendrites (intracellularly loaded). The trade-offs identified for tSNR, spatial confinement, and computational cost indicate which of synthetic refocusing or deconvolution can better meet the scientific requirements of future LFM calcium imaging applications.
KEYWORDS: Signal to noise ratio, Deconvolution, 3D image processing, Image resolution, Luminescence, Dendrites, Point spread functions, Neurophotonics, Microscopes, Microlens
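For readers unfamiliar with the deconvolution step compared above, here is a minimal Richardson-Lucy sketch in Python (assuming numpy and scipy; the Gaussian PSF and toy object are illustrative and not the light-field PSF model or TV-regularized variant used in the paper). It shows the basic multiplicative update and why more iterations sharpen the estimate while amplifying noise.

```python
import numpy as np
from scipy.signal import fftconvolve

def richardson_lucy(image, psf, n_iter=10, eps=1e-12):
    """Basic Richardson-Lucy iteration: estimate <- estimate * (ratio convolved with flipped PSF)."""
    estimate = np.full_like(image, image.mean())
    psf_flip = psf[::-1, ::-1]
    for _ in range(n_iter):
        blurred = fftconvolve(estimate, psf, mode='same')
        ratio = image / np.maximum(blurred, eps)
        estimate *= fftconvolve(ratio, psf_flip, mode='same')
    return estimate

# Toy example: blur a point-like object with a Gaussian PSF and add a little noise.
rng = np.random.default_rng(1)
x = np.zeros((65, 65)); x[32, 32] = 1.0
yy, xx = np.mgrid[-8:9, -8:9]
psf = np.exp(-(xx**2 + yy**2) / (2 * 2.0**2)); psf /= psf.sum()
blurred = np.clip(fftconvolve(x, psf, mode='same')
                  + 0.001 * rng.standard_normal(x.shape), 0, None)
for n in (5, 50):
    est = richardson_lucy(blurred, psf, n_iter=n)
    print(n, 'iterations, fraction of energy at the true peak:', est[32, 32] / est.sum())
```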
Significance: Light-field microscopy (LFM) enables high signal-to-noise ratio (SNR) and light efficient volume imaging at fast frame rates. Voltage imaging with genetically encoded voltage indicators (GEVIs) stands to particularly benefit from LFM’s volumetric imaging capability due to high required sampling rates and limited probe brightness and functional sensitivity.
Aim: We demonstrate subcellular resolution GEVI light-field imaging in acute mouse brain slices resolving dendritic voltage signals in three spatial dimensions.
Approach: We imaged action potential-induced fluorescence transients in mouse brain slices sparsely expressing the GEVI VSFP-Butterfly 1.2 in wide-field microscopy (WFM) and LFM modes. We compared functional signal SNR and localization between different LFM reconstruction approaches and between LFM and WFM.
Results: LFM enabled three-dimensional (3-D) localization of action potential-induced fluorescence transients in neuronal somata and dendrites. Nonregularized deconvolution decreased SNR with increased iteration number compared to synthetic refocusing but increased axial and lateral signal localization. SNR was unaffected for LFM compared to WFM.
Conclusions: LFM enables 3-D localization of fluorescence transients, therefore eliminating the need for structures to lie in a single focal plane. These results demonstrate LFM’s potential for studying dendritic integration and action potential propagation in three spatial dimensions.
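One common way to quantify the SNR of an extracted fluorescence transient, used here purely as an illustration (the exact definition in these studies may differ), is the peak ΔF/F of the response divided by the standard deviation of the baseline. A minimal Python sketch with a synthetic trace:

```python
import numpy as np

def tsnr(trace, baseline_idx, response_idx):
    """Temporal SNR: peak response amplitude (in dF/F) over the baseline standard deviation."""
    baseline = trace[baseline_idx]
    dff = (trace - baseline.mean()) / baseline.mean()   # dF/F relative to the baseline
    noise = dff[baseline_idx].std(ddof=1)
    return dff[response_idx].max() / noise

# Toy trace: baseline fluorescence, then an exponentially decaying transient plus noise.
rng = np.random.default_rng(2)
t = np.arange(200)
trace = 100.0 + 20.0 * np.exp(-(t - 80) / 30.0) * (t >= 80) + rng.normal(0, 1.0, t.size)
print('SNR ~', tsnr(trace, baseline_idx=slice(0, 80), response_idx=slice(80, 200)))
```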
Diffusive phenomena are ubiquitous in nature and society, and have been extensively studied in fields ranging from the natural sciences to engineering. Recently, however, the more challenging inverse problem of detecting the source of a diffusion in a network has started to receive significant attention. Much of this research has concentrated on finding origins in tree-like networks, but these approaches cannot be easily extended to generic networks; furthermore, only some methods consider realistic temporal diffusion dynamics. We introduce a novel method to localise the source of multiple rumours in an arbitrary network of known topology, using partial observations of the network nodes. We first present two mathematical models of the discrete-time, susceptible-infected propagation dynamics, which accurately capture the diffusion process and have low computational complexity. The first is a simplified likelihood of infection at a node at a certain time after the rumour is initiated. The second is a formulation of the infection likelihood of a node as a function of its shortest distance to the source. We then design an efficient single-source detection algorithm that leverages these mathematical models of diffusion together with the assumption that the start time of the propagation is known. Finally, we show how these methods can be extended to the case where the start time of the rumour is unknown, by exploiting the dissimilarity in the infection dynamics of different nodes in the network. Simulation results show that a high source estimation probability is achieved using a small number of observations.
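A simplified sketch of the known-start-time idea (not the likelihood models of the paper): score every candidate source by how well its shortest-path distances to a few observer nodes match the observed infection times, and pick the best-scoring node. The graph, observers, and squared-error score below are illustrative.

```python
from collections import deque

def bfs_distances(adj, src):
    """Shortest-path (hop) distances from src in an unweighted graph given as an adjacency dict."""
    dist = {src: 0}
    queue = deque([src])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return dist

def estimate_source(adj, observations, start_time=0):
    """Pick the node whose distances best explain the observed infection times."""
    best, best_cost = None, float('inf')
    for cand in adj:
        dist = bfs_distances(adj, cand)
        cost = sum((obs_t - start_time - dist.get(obs, float('inf'))) ** 2
                   for obs, obs_t in observations.items())
        if cost < best_cost:
            best, best_cost = cand, cost
    return best

# Toy example: rumour started at node 0 at time 0; three observers report their infection times.
adj = {0: [1, 2], 1: [0, 3], 2: [0, 3, 4], 3: [1, 2, 5], 4: [2], 5: [3]}
observations = {4: 2, 5: 3, 1: 1}      # node: observed infection time
print(estimate_source(adj, observations))   # expected: 0
```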
Two-photon calcium imaging can be used to monitor the activity of thousands of neurons across multiple brain areas at single-cell resolution. To harness the power of this imaging technology, neuroscientists require algorithms to detect from the imaging data the time points at which each neuron was active. We present an algorithm based on Finite Rate of Innovation (FRI) theory to detect neuronal spiking activity from this data. By exploiting the parametric structure of the signal, the activity detection problem can be reduced to the classic FRI problem of reconstructing a stream of Diracs.
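To illustrate the parametric structure being exploited (this is a toy demonstration, not the FRI algorithm itself): if each spike produces a one-sided exponential calcium transient, applying the inverse of the AR(1) decay collapses the sampled trace back into an approximate stream of Diracs at the spike times. Names, decay constant, and threshold below are illustrative.

```python
import numpy as np

# Model: c[n] = sum_k a_k * exp(-(n - n_k)/tau) for n >= n_k, one transient per spike.
rng = np.random.default_rng(3)
tau, n_samples = 10.0, 300
spike_times, amps = [50, 120, 200], [1.0, 0.7, 1.2]

n = np.arange(n_samples)
c = np.zeros(n_samples)
for n_k, a_k in zip(spike_times, amps):
    c += a_k * np.exp(-(n - n_k) / tau) * (n >= n_k)
c_noisy = c + 0.02 * rng.standard_normal(n_samples)

# Inverse AR(1) filter: s[n] = c[n] - exp(-1/tau) * c[n-1] turns each transient into a noisy Dirac.
gamma = np.exp(-1.0 / tau)
s = c_noisy.copy()
s[1:] -= gamma * c_noisy[:-1]

detected = np.where(s > 0.3)[0]          # crude threshold, sufficient for this toy example
print('true spikes:', spike_times, 'detected:', detected.tolist())
```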
Blur in images, caused by camera motion, is typically thought of as a problem. The approach described in this paper shows instead that it is possible to use the blur caused by the integration of light rays at different positions along a moving camera trajectory to extract information about the light rays present within the scene. Retrieving the light rays of a scene from different viewpoints is equivalent to retrieving the plenoptic function of the scene. In this paper, we focus on a specific case in which the blurred image of a scene, containing a flat plane with a texture signal that is a sum of sine waves, is analysed to recreate the plenoptic function. The image is captured by a single lens camera with shutter open, moving in a straight line between two points, resulting in a swiped image. It is shown that finite rate of innovation sampling theory can be used to recover the scene geometry and therefore the epipolar plane image from the single swiped image. This epipolar plane image can be used to generate unblurred images for a given camera location.
The notion of a graph wavelet gives rise to more advanced processing of data on graphs due to its ability to operate in a localized manner, across newly arising data-dependency structures, with respect to the graph signal and the underlying graph structure, thereby taking into consideration the inherent geometry of the data. In this work, we tackle the problem of designing graph wavelet filterbanks on circulant graphs for the sparse representation of certain classes of graph signals. The underlying graph can be data-driven as well as fixed, with applications including image processing and social network theory, where clusters can be modelled as circulant graphs. We present a set of novel graph wavelet filterbank constructions, which annihilate higher-order polynomial graph signals (up to a border effect) defined on the vertices of undirected circulant graphs and are localised in the vertex domain. We give preliminary results on their performance for non-linear graph signal approximation and denoising. Furthermore, we extend our previously developed segmentation-inspired graph wavelet framework for non-linear image approximation by incorporating notions of smoothness and vanishing moments, which further improves performance compared to traditional methods.
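A toy illustration of the annihilation property on the simplest circulant graph, a cycle with connection set {1} (this is not the filterbank construction of the paper): the graph Laplacian acts as a second difference in the vertex domain and annihilates a signal that is linear in the vertex index everywhere except at the wrap-around border.

```python
import numpy as np

N = 16
# Adjacency matrix of the simplest circulant graph: a cycle with connection set {1}.
A = np.zeros((N, N))
for i in range(N):
    A[i, (i + 1) % N] = A[i, (i - 1) % N] = 1
L = np.diag(A.sum(axis=1)) - A        # graph Laplacian, a circulant second-difference operator

x = np.arange(N, dtype=float)         # degree-1 polynomial in the vertex index
y = L @ x
print(np.round(y, 6))                 # zero everywhere except at the border vertices 0 and N-1
```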
In the last few years, several new methods have been developed for the sampling and exact reconstruction of specific classes of non-bandlimited signals known as signals with finite rate of innovation (FRI). This is achieved by using adequate sampling kernels and reconstruction schemes. An important class of such kernels is the one made of functions able to reproduce exponentials. In this paper we review a new strategy for sampling these signals which is universal in that it works with any kernel. We do so by noting that meeting the exact exponential reproduction condition is too stringent a constraint; we thus allow for a controlled error in the reproduction formula in order to use the exponential reproduction idea with any kernel, and we develop a reconstruction method which is more robust to noise. We also present a novel method that is able to reconstruct infinite streams of Diracs, even in high-noise scenarios. We sequentially process the discrete samples and output the locations and amplitudes of the Diracs in real time. In this context we also show that we can achieve a high reconstruction accuracy for 1000 Diracs at SNRs as low as 5 dB.
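The sequential algorithm itself is beyond a short snippet, but the core step that FRI reconstruction of Diracs builds on, recovering the Dirac locations from power-sum measurements via an annihilating filter (Prony's method), can be sketched as follows (noiseless case, two Diracs; the variable names are illustrative).

```python
import numpy as np

# Two Diracs at locations t_k (as fractions of the period) with amplitudes a_k.
t_true = np.array([0.21, 0.64])
a_true = np.array([1.0, 0.5])
u = np.exp(2j * np.pi * t_true)                       # u_k = exp(j*2*pi*t_k)

K = len(t_true)
m = np.arange(2 * K)
s = (a_true * u[None, :] ** m[:, None]).sum(axis=1)   # measurements s_m = sum_k a_k u_k^m

# Annihilating filter: build the Toeplitz system T h = 0 and take its null space.
T = np.array([[s[K + i - j] for j in range(K + 1)] for i in range(K)])
_, _, vh = np.linalg.svd(T)
h = vh[-1].conj()                                      # filter coefficients annihilating s
t_est = np.sort(np.angle(np.roots(h)) / (2 * np.pi) % 1)
print('true:', np.sort(t_true), 'estimated:', np.round(t_est, 4))
```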
In this paper, we propose two multiview image compression methods. The basic concept of both schemes is the layer-based representation, in which the captured three-dimensional (3D) scene is partitioned into layers, each related to a constant depth in the scene. The first algorithm is a centralized scheme in which each layer is de-correlated using a separable multi-dimensional wavelet transform applied across the viewpoint and spatial dimensions. The transform is modified to deal efficiently with occlusions and disparity variations at different depths. Although the method achieves a high compression rate, the joint encoding approach requires the transmission of all data to the users. By contrast, in an interactive setting, the users request only a subset of the captured images, but in an order that is unknown a priori. We address this scenario in the second algorithm using Distributed Source Coding (DSC) principles, which reduce the inter-view redundancy and facilitate random access at the image level. We demonstrate that the proposed centralized and interactive methods outperform H.264/MVC and JPEG 2000, respectively.
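A toy illustration of why a transform applied across the viewpoint dimension decorrelates a constant-depth layer (this is a deliberately simplified two-view Haar example, not the codec described above): after compensating the second view by the layer's disparity, the difference band is essentially empty.

```python
import numpy as np

rng = np.random.default_rng(4)
texture = rng.random((32, 64))                 # texture belonging to one constant-depth layer

disparity = 5                                  # constant disparity for a constant-depth layer
view0 = texture
view1 = np.roll(texture, disparity, axis=1)    # the same layer seen from an adjacent viewpoint

# 1-D Haar difference across the viewpoint dimension, with and without disparity compensation.
highpass_raw = (view0 - view1) / np.sqrt(2)
view1_comp = np.roll(view1, -disparity, axis=1)
highpass_dc = (view0 - view1_comp) / np.sqrt(2)

print('high-band energy without compensation:', float((highpass_raw ** 2).sum()))
print('high-band energy with compensation:   ', float((highpass_dc ** 2).sum()))  # zero in this toy case
```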
This paper proposes a new approach to distributed video coding. Distributed video coding is a new paradigm in video coding, which is based on the concept of decoding with side information at the decoder. Such a coding scheme employs a low-complexity encoder, making it well suited for low-power devices such as mobile video cameras.
The uniqueness of our work lies in the combined use of the discrete wavelet transform (DWT) and the concept of sampling of signals with finite rate of innovation (FRI). This enables the decoder to retrieve the motion parameters and reconstruct the video sequence from the low-resolution version of each transmitted frame. Unlike currently existing practical coders, we do not employ traditional channel coding techniques. For a simple video sequence with a fixed background, our preliminary results show that the proposed coding scheme can achieve a better PSNR than JPEG2000 intra-frame coding at low bit rates.
The standard separable two-dimensional (2-D) wavelet transform (WT) has recently achieved great success in image processing because it provides a sparse representation of smooth images. However, it fails to capture efficiently one-dimensional (1-D) discontinuities, like edges or contours. These features, being elongated and characterized by geometrical regularity along different directions, intersect many wavelet basis functions and generate many large-magnitude wavelet coefficients. Since contours are very important elements in the visual perception of images, preserving a good reconstruction of these directional features is fundamental to the visual quality of compressed images. We propose a construction of critically sampled perfect-reconstruction transforms, called directionlets, with directional vanishing moments (DVMs) imposed on the corresponding basis functions along different directions. We also demonstrate the superior non-linear approximation (NLA) results achieved by our transforms, and we show how to design and implement a novel, efficient space-frequency quantization (SFQ) compression algorithm using directionlets. Our new compression method beats the standard SFQ in terms of both mean-square error (MSE) and visual quality, especially in the low-rate compression regime. We also show that our compression method does not increase the order of computational complexity compared to the standard SFQ algorithm.
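A rough illustration of why directional vanishing moments matter (a toy experiment, not the directionlet transform itself): differencing an image along the direction of a diagonal step edge annihilates the edge, whereas differencing across it produces a large coefficient on every row the edge crosses.

```python
import numpy as np

N = 64
i, j = np.mgrid[0:N, 0:N]
img = (j > i).astype(float)                  # step edge along the main diagonal

# Horizontal first difference: one large coefficient per row along the edge.
d_horiz = img[:, 1:] - img[:, :-1]

# First difference along the 45-degree direction (the edge's own direction): zero everywhere.
d_diag = img[1:, 1:] - img[:-1, :-1]

print('large horizontal-difference coefficients:', int((np.abs(d_horiz) > 0.5).sum()))   # ~N
print('large diagonal-difference coefficients:  ', int((np.abs(d_diag) > 0.5).sum()))    # 0
```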
Recently, it was shown that it is possible to sample classes of signals with finite rate of innovation. These sampling schemes, however, use kernels with infinite support, and this leads to complex and unstable reconstruction algorithms. In this paper, we show that many signals with finite rate of innovation can be sampled and perfectly reconstructed using kernels of compact support and a local reconstruction algorithm. The class of kernels that we can use is very rich and includes any function satisfying Strang-Fix conditions, exponential splines, and functions with rational Fourier transforms. Our sampling schemes can be used for either 1-D or 2-D signals with finite rate of innovation.
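A toy instance of the compact-support idea (a single Dirac and the degree-1 B-spline, which satisfies Strang-Fix conditions of order 2; names and setup are illustrative): polynomial reproduction turns the samples into the first two signal moments, from whose ratio the Dirac location follows.

```python
import numpy as np

def hat(t):
    """Centered B-spline of degree 1 (support [-1, 1])."""
    return np.maximum(0.0, 1.0 - np.abs(t))

# Signal: a single Dirac of amplitude a at location t0 in [0, N).
a, t0, N = 2.5, 7.3, 16
n = np.arange(N)

# Samples y_n = <x, phi(. - n)> reduce to a * phi(t0 - n) for a Dirac.
y = a * hat(t0 - n)

# Since sum_n phi(t - n) = 1 and sum_n n * phi(t - n) = t (polynomial reproduction),
# the samples give the first two moments of the signal exactly.
tau0 = y.sum()            # = a
tau1 = (n * y).sum()      # = a * t0
print('amplitude:', tau0, 'location:', tau1 / tau0)   # 2.5, 7.3
```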
The application of the wavelet transform in image processing is most frequently based on a separable construction: lines and columns in an image are treated independently, and the basis functions are simply products of the corresponding one-dimensional functions. Such a method keeps the design and computation simple, but it cannot properly capture all the properties of an image. In this paper, a new truly separable discrete multi-directional transform is proposed, with a subsampling method based on lattice theory. Alternatively, the subsampling can be omitted, which leads to a multi-directional frame. This transform can be applied in many areas, such as denoising, non-linear approximation, and compression. The results on non-linear approximation and denoising show interesting gains compared to the standard two-dimensional analysis.
In this paper, we consider classes of non-bandlimited signals, namely streams of Diracs and piecewise polynomial signals, and show that these signals can be sampled and perfectly reconstructed using wavelets as the sampling kernel. Due to the multiresolution structure of the wavelet transform, these new sampling theorems naturally lead to the development of a new resolution enhancement algorithm based on wavelet footprints. Preliminary results show that this algorithm is also very resilient to noise.
In recent years, wavelets have had an important impact on signal processing theory and practice. The effectiveness of wavelets is mainly due to their capability of representing piecewise smooth signals with few non-zero coefficients. Away from discontinuities, the inner product between a wavelet and a smooth function will be either zero or very small. At singular points, a finite number of wavelets concentrated around the discontinuity lead to non-zero inner products. This ability of the wavelet transform to pack the main signal information into a few large coefficients is behind the success of wavelet-based denoising algorithms. Indeed, traditional approaches simply consist in thresholding the noisy wavelet coefficients, so the few large coefficients carrying the essential information are usually kept, while the small coefficients, mainly containing noise, are cancelled. However, wavelet denoising suffers from two main drawbacks: it is not shift-invariant and it exhibits pseudo-Gibbs phenomena around discontinuities.
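The thresholding approach referred to above can be sketched in a few lines with PyWavelets (assuming pywt is installed; the 'db4' wavelet, decomposition depth, and universal-threshold choice are one common option, not a prescription).

```python
import numpy as np
import pywt

rng = np.random.default_rng(5)

# Piecewise-smooth test signal plus white Gaussian noise.
t = np.linspace(0, 1, 1024)
clean = np.sin(2 * np.pi * 3 * t) + (t > 0.5)           # smooth part plus a step discontinuity
sigma = 0.2
noisy = clean + sigma * rng.standard_normal(t.size)

# Classic wavelet denoising: soft-threshold the detail coefficients, keep the large ones.
coeffs = pywt.wavedec(noisy, 'db4', level=5)
thresh = sigma * np.sqrt(2 * np.log(t.size))             # universal threshold
denoised_coeffs = [coeffs[0]] + [pywt.threshold(c, thresh, mode='soft') for c in coeffs[1:]]
denoised = pywt.waverec(denoised_coeffs, 'db4')[: t.size]

print('noisy    MSE:', float(np.mean((noisy - clean) ** 2)))
print('denoised MSE:', float(np.mean((denoised - clean) ** 2)))
```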