The evaluation aims in two directions. First, we give an in-depth analysis of the different hypotheses for the unknown operator and investigate the influence of numerical training data. Second, we evaluate the performance of the proposed method against the classical rebinning approach. We demonstrate that the derived network achieves better results than the baseline method, and that such operators can be trained with simulated data without losing their generality, making them applicable to real data without the need for retraining or transfer learning.

In this paper, a new multivariate statistical model for retinal Optical Coherence Tomography (OCT) B-scans is proposed. Due to the layered structure of OCT images, there is a horizontal dependency between adjacent pixels at specific distances, which led us to propose a more accurate multivariate statistical model for OCT processing applications such as denoising. Due to the asymmetric form of the probability density function (pdf) within each retinal layer, a generalized version of the multivariate Gaussian Scale Mixture (GSM) model, which we refer to as the GM-GSM model, is proposed for each layer. In this model, the pixel intensities in each retinal layer are modeled with an asymmetric Bessel K Form (BKF) distribution, a specific instance of the GM-GSM model. Then, by grouping certain layers together, a mixture of GM-GSM models with eight components is constructed. The proposed model is easily converted to a multivariate Gaussian Mixture Model (GMM) for use in the spatially constrained GMM denoising algorithm. Q-Q plots are used to evaluate the goodness of fit of each component of the final mixture model. The improvement in noise reduction obtained with the GM-GSM model indicates that the proposed statistical model describes OCT data more accurately than competing methods that do not consider spatial dependencies between neighboring pixels.
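The abstract describes the GM-GSM model only verbally. As orientation, a standard multivariate Gaussian Scale Mixture, of which the BKF distribution is the special case with a Gamma-distributed mixing variable, can be sketched as follows; the mean-shift extension noted in the comments is one common way to obtain the asymmetry the authors mention, not necessarily their exact construction.

```latex
% Generic Gaussian Scale Mixture (GSM) -- background sketch only,
% not the authors' exact GM-GSM formulation.
\begin{align}
  \mathbf{x} &= \sqrt{z}\,\mathbf{u},
    \qquad \mathbf{u} \sim \mathcal{N}(\mathbf{0},\boldsymbol{\Sigma}),
    \qquad z > 0 \ \text{independent of}\ \mathbf{u},\\
  p(\mathbf{x}) &= \int_{0}^{\infty}
    \mathcal{N}\!\bigl(\mathbf{x};\,\mathbf{0},\, z\,\boldsymbol{\Sigma}\bigr)\,
    p(z)\,\mathrm{d}z .
\end{align}
% A Gamma-distributed z yields Bessel K Form (BKF) marginals; replacing
% x = sqrt(z) u by x = mu + beta*z + sqrt(z) u (a normal mean-variance
% mixture) skews the density, giving an asymmetric BKF-type model.
```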
Multispectral photoacoustic tomography (PAT) can resolve tissue chromophore distributions through spectral un-mixing. It works by identifying absorption spectrum variations across a sequence of photoacoustic images acquired at multiple illumination wavelengths. Multispectral acquisition inevitably creates a large dataset. To cut down the data volume, sparse sampling methods that reduce the number of detectors have been developed. However, image reconstruction in sparse sampling PAT is challenging because of insufficient angular coverage, and during spectral un-mixing these inaccurate reconstructions further amplify imaging artefacts and contaminate the results. To solve this problem, we present interlaced sparse sampling (ISS) PAT, a method that involves 1) a novel scanning-based image acquisition scheme in which the sparse detector array rotates while the illumination wavelength is switched, so that dense angular coverage can be achieved with only a few detectors; and 2) a corresponding image reconstruction algorithm that uses an anatomical prior image created by the ISS strategy to guide PAT image computation. Reconstructed from the signals acquired at different wavelengths (and hence angles), this self-generated prior image fuses multispectral and angular information, and thus has rich anatomical features and minimal artefacts. A specialized iterative imaging model that effectively incorporates this anatomical prior into the reconstruction process is also developed. Simulation, phantom, and in vivo animal experiments showed that even at a 1/6 or 1/8 sparse sampling rate, our method achieves image reconstruction and spectral un-mixing results comparable to those obtained with the conventional dense sampling method.

Training deep neural networks usually requires a large amount of labeled data to obtain good performance. In medical image analysis, however, obtaining high-quality labels is laborious and expensive, as accurately annotating medical images demands the expert knowledge of clinicians. In this paper, we present a novel relation-driven semi-supervised framework for medical image classification. It is a consistency-based method that exploits unlabeled data by encouraging prediction consistency for a given input under perturbations, and it leverages a self-ensembling model to produce high-quality consistency targets for the unlabeled data. Considering that human diagnosis often refers to previous analogous cases to make reliable decisions, we introduce a novel sample relation consistency (SRC) paradigm that effectively exploits unlabeled data by modeling the relationship information among different samples (a minimal sketch of this idea is given at the end of this section). Unlike existing consistency-based methods, which simply enforce consistency of individual predictions, our framework explicitly enforces consistency of the semantic relations among different samples under perturbations, encouraging the model to explore extra semantic information from unlabeled data. We conducted extensive experiments on two public benchmark medical image classification datasets: skin lesion diagnosis with the ISIC 2018 challenge and thorax disease classification with ChestX-ray14. Our method outperforms many state-of-the-art semi-supervised learning methods in both single-label and multi-label image classification scenarios.

Brain imaging genetics, which integrates genetic variation with brain structure or function to study the genetic basis of brain disorders, is becoming increasingly important in brain science. Multi-modal imaging data collected by different technologies, each measuring the same brain differently, may carry complementary information. Unfortunately, we do not know the extent to which phenotypic variance is shared among multiple imaging modalities, which in turn may trace back to complex genetic mechanisms. In this paper, we propose a novel dirty multi-task sparse canonical correlation analysis (SCCA) to study imaging genetics problems in which multi-modal brain imaging quantitative traits (QTs) are involved. The proposed method takes advantage of multi-task learning and parameter decomposition. It can identify not only the imaging QTs and genetic loci shared across multiple modalities, but also the modality-specific imaging QTs and genetic loci, exhibiting a flexible capability of identifying complex multi-SNP-multi-QT associations.
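The abstract names the two ingredients of the dirty SCCA model, multi-task learning and parameter decomposition, without giving the objective. The following is a generic "dirty model" sketch consistent with that description (the paper's exact penalties and constraints may differ): the canonical weight matrix on the genetic side is split into a shared part and a modality-specific part.

```latex
% Sketch of a dirty multi-task SCCA objective -- an assumed generic form,
% not the authors' exact model. X: SNP data; Y_c: imaging QTs of modality c.
\begin{align}
  \min_{\{\mathbf{u}_c,\,\mathbf{v}_c\}}\;
    & -\sum_{c=1}^{C}\mathbf{u}_c^{\top}\mathbf{X}^{\top}\mathbf{Y}_c\mathbf{v}_c
      \;+\;\lambda_{s}\,\lVert\mathbf{S}\rVert_{2,1}
      \;+\;\lambda_{b}\,\lVert\mathbf{B}\rVert_{1}\\
  \text{s.t.}\quad
    & \mathbf{U}=[\mathbf{u}_1,\dots,\mathbf{u}_C]=\mathbf{S}+\mathbf{B},
      \qquad
      \lVert\mathbf{X}\mathbf{u}_c\rVert_2^2\le 1,\;
      \lVert\mathbf{Y}_c\mathbf{v}_c\rVert_2^2\le 1 .
\end{align}
% The l_{2,1} penalty couples rows of S across modalities (shared loci),
% while the elementwise l_1 penalty on B admits modality-specific loci;
% the imaging-side weights v_c can be decomposed analogously.
```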
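Returning to the sample relation consistency (SRC) paradigm from the semi-supervised classification work above: its core can be read as building a Gram-style relation matrix over the features of a batch under two perturbations and penalizing the mismatch between the two matrices. The sketch below is an illustrative PyTorch reading of that description, not the authors' released code; relation_matrix and src_loss are our hypothetical names.

```python
import torch
import torch.nn.functional as F

def relation_matrix(features: torch.Tensor) -> torch.Tensor:
    """Pairwise sample-relation (Gram) matrix for a batch of features.

    features: (batch, dim) activations from the network's feature layer.
    Rows are L2-normalized so relations reflect angles, not magnitudes.
    """
    feats = F.normalize(features, dim=1)
    return feats @ feats.t()  # (batch, batch) similarity matrix

def src_loss(feat_student: torch.Tensor, feat_teacher: torch.Tensor) -> torch.Tensor:
    """Sample relation consistency: make the relation matrix of the
    (perturbed) student batch match that of the self-ensembling teacher."""
    r_s = relation_matrix(feat_student)
    r_t = relation_matrix(feat_teacher).detach()  # teacher provides targets only
    return F.mse_loss(r_s, r_t)

# Usage sketch: total loss = supervised cross-entropy on labeled data
# + ramp-up weight * (prediction consistency + src_loss on unlabeled batches).
```

In a self-ensembling setup the teacher is typically an exponential moving average of the student, so its relation matrix serves as a stable consistency target for the unlabeled data.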