The neural network trained with in vitro datasets followed by fine-tuning with in vivo datasets had the highest accuracy at 88.0%. The SR-US images produced with deep learning allowed visualization of vessels as small as 25 μm in diameter, which is below the diffraction limit (wavelength of 110 μm at 14 MHz). The performance of the 3DCNN was encouraging for real-time SR-US imaging, with an average processing frame rate of 51 Hz for in vivo data with GPU acceleration.

We report the time kinetics of fluorescently labelled microbubbles in capillary-level microvasculature as measured via confocal microscopy and compare these results to ultrasound localization microscopy. The observed 19.4 ± 4.2 microbubbles per confocal field-of-view (212 μm × 212 μm) is in excellent agreement with the expected count of 19.1 microbubbles per frame. The estimated time to fully perfuse this capillary network was 193 seconds, which corroborates the values reported in the literature. We then modeled the capillary network as an empirically determined discrete-time Markov chain with adjustable microbubble transition probabilities through individual capillaries. Monte Carlo random walk simulations found perfusion times ranging from 24.5 seconds for unbiased Markov chains up to 182 seconds for heterogeneous flow distributions. This pilot study confirms a probability-derived explanation for the long acquisition times required for super-resolution ultrasound localization microscopy.

Catheter ablation is a common treatment for arrhythmia, but can fail if lesion lines are non-contiguous. Identification of gaps and non-transmural lesions can reduce the likelihood of treatment failure and recurrent arrhythmia. Intracardiac myocardial elastography (IME) is a strain imaging technique that provides visualization of the lesion line.
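The Monte Carlo random-walk experiment described in the microbubble abstract above can be sketched as follows. The toy network topology, one-bubble-per-frame entry rate, and frame rate are illustrative assumptions, not the study's empirical parameters:

```python
import numpy as np

def perfusion_time(P, frame_rate=50.0, rng=None, max_frames=100_000):
    """Monte Carlo estimate of the time until every node (capillary segment)
    of a discrete-time Markov chain has been visited by at least one
    microbubble. P is a row-stochastic transition matrix; node 0 is the
    feeding inlet and the last node is the draining outlet, where bubbles
    leave the network. This is an illustrative toy model, not the paper's
    empirically determined chain."""
    rng = rng or np.random.default_rng()
    n = P.shape[0]
    visited = np.zeros(n, dtype=bool)
    bubbles = []
    for frame in range(max_frames):
        bubbles.append(0)                    # one new bubble enters per frame
        moved = []
        for s in bubbles:
            visited[s] = True
            s = rng.choice(n, p=P[s])        # one Markov transition per frame
            if s == n - 1:
                visited[s] = True            # bubble exits via the outlet
            else:
                moved.append(s)
        bubbles = moved
        if visited.all():
            return (frame + 1) / frame_rate  # convert frames to seconds
    return None                              # network never fully perfused

# Unbiased toy chain: from every segment, all 4 segments are equally likely.
P_unbiased = np.full((4, 4), 0.25)
t = perfusion_time(P_unbiased, rng=np.random.default_rng(0))
```

Biasing rows of `P` toward a few high-flow capillaries leaves the low-probability segments waiting much longer for their first bubble, which is the probability-based explanation the abstract offers for the long ULM acquisition times.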
Lesion size estimation and gap resolution were evaluated in an open-chest canine model (n=3), and clinical feasibility was investigated in patients undergoing ablation to treat typical cavotricuspid isthmus atrial flutter (n=5). A lesion line consisting of three lesions and two gaps was generated in each canine left ventricle via epicardial ablation. One lesion was generated in one canine right ventricle. Average lesion and gap areas were measured with high agreement (33 ± 14 mm2 and 30 ± 15 mm2, respectively) when compared against gross pathology (34 ± 19 mm2 and 26 ± 11 mm2, respectively). Gaps as small as 11 mm2 (3.6 mm on the epicardial surface) were identifiable. Absolute and relative errors in estimated lesion area were 9.3 ± 8.4 mm2 and 31 ± 34%; errors in estimated gap area were 11 ± 9.0 mm2 and 40 ± 29%. Flutter patients were imaged throughout the procedure. Strain was shown to be capable of differentiating between baseline and ablation completion, as confirmed by conduction block. In all patients, strain decreased in the cavotricuspid isthmus after ablation (mean paired difference of -17 ± 11%, p < 0.05). IME could potentially become a useful ablation monitoring tool in the clinic.

Minimally invasive procedures rely on image guidance for navigation at the operation site to avoid large surgical incisions. Intra-operative images are often used for guidance, but important structures may not be well visible. These structures can be overlaid from pre-operative images, and accurate alignment can be established using registration. Registration based on the point-to-plane correspondence model was recently proposed and shown to perform well. However, registration may still fail in challenging cases due to a large portion of outliers. In this paper, we describe a correspondence weighting scheme to improve the registration performance.
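The correspondence weighting just introduced can be illustrated with the point-to-plane model it builds on: each correspondence contributes a linearized point-to-plane constraint, and a per-correspondence attention weight scales its influence on the estimated motion. The numpy sketch below is a minimal illustration under that assumption, not the paper's implementation; all names and the small-rotation linearization are illustrative.

```python
import numpy as np

def weighted_point_to_plane(points, normals, targets, weights):
    """Solve for a small rigid motion (linearized rotation r, translation t)
    minimizing sum_i w_i * (n_i . ((I + [r]x) p_i + t - q_i))^2.
    High attention weights let inliers dominate; outliers are suppressed.
    Illustrative sketch only."""
    A = np.zeros((len(points), 6))
    b = np.zeros(len(points))
    for i, (p, n, q, w) in enumerate(zip(points, normals, targets, weights)):
        sw = np.sqrt(w)
        A[i, :3] = sw * np.cross(p, n)   # rotation coefficients: r . (p x n)
        A[i, 3:] = sw * n                # translation coefficients: t . n
        b[i] = sw * n @ (q - p)
    x, *_ = np.linalg.lstsq(A, b, rcond=None)
    return x[:3], x[3:]                  # axis-angle rotation, translation

# Synthetic check: pure translation, one gross outlier with near-zero weight.
rng = np.random.default_rng(0)
p = rng.standard_normal((12, 3))
n = rng.standard_normal((12, 3))
n /= np.linalg.norm(n, axis=1, keepdims=True)
q = p + np.array([1.0, -2.0, 0.5])       # ground-truth translation
q[0] += 7.0                              # corrupt one correspondence
w = np.ones(12); w[0] = 1e-9             # attention suppresses the outlier
r, t = weighted_point_to_plane(p, n, q, w)
```

With the outlier down-weighted, the recovered translation matches the ground truth; with uniform weights it would be biased, which is the failure mode the learned attention model addresses.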
By learning an attention model, inlier correspondences receive higher attention in the motion estimation while outlier correspondences are suppressed. Instead of using per-correspondence labels, our objective function allows training the model directly by minimizing the registration error. We demonstrate highly increased robustness, e.g. increasing the success rate from 84.9% to 97.0% for spine registration. In contrast to previously proposed methods, we also achieve a high accuracy of around 0.5 mm mean re-projection distance. In addition, our method requires a relatively small amount of training data, is able to learn from simulated data, and generalizes to images with additional structures which are not present during training. Furthermore, a single model can be trained for both different views and different anatomical structures.

Prostate cancer (PCa) is a disease with a wide range of tissue patterns, and this adds to its classification difficulty. Moreover, data source heterogeneity, i.e. inconsistent data collected using different machines, under different conditions, by different operators, from patients of different ethnic groups, etc., further hinders the effectiveness of training a generalized PCa classifier. In this paper, for the first time, a Generative Adversarial Network (GAN)-based three-player minimax game framework is used to tackle data source heterogeneity and to improve PCa classification performance, where a proposed modified U-Net is used as the encoder. Our dataset consists of novel high-frequency ExactVu ultrasound (US) data collected from 693 patients at five data centers. Gleason Scores (GSs) are assigned to the 12 prostatic regions of each patient. Two classification tasks, benign vs. malignant and low- vs. high-grade, are conducted, and the classification results of different prostatic regions are compared. For benign vs.
malignant classification, the three-player minimax game framework achieves an Area Under the Receiver Operating Characteristic curve (AUC) of 93.4%, a sensitivity of 95.1% and a specificity of 87.7%, representing significant improvements of 5.0%, 3.9%, and 6.0%, respectively, compared to those obtained using the heterogeneous data directly, which confirms its effectiveness in terms of PCa classification.

Fetoscopic laser photocoagulation is the most effective treatment for Twin-to-Twin Transfusion Syndrome, a condition affecting twin pregnancies in which a deregulation of blood circulation through the placenta can be fatal to both babies. For the purposes of surgical planning, we design the first automatic approach to detect and segment the intrauterine cavity from axial, sagittal and coronal MRI stacks. Our methodology relies on the ability of capsule networks to successfully capture the part-whole interdependency of objects in the scene, particularly for unique class instances (i.e., the intrauterine cavity). The presented deep Q-CapsNet reinforcement learning framework is built upon a context-adaptive detection policy to generate a bounding box of the womb. A capsule architecture is subsequently designed to segment (or refine) the whole intrauterine cavity. This network is coupled with a strided nnU-Net feature extractor, which encodes discriminative feature maps to construct strong primary capsules.
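Returning to the PCa abstract above: its three-player minimax game can be summarized by three coupled loss terms, in the spirit of domain-adversarial training. The sketch below only shows how the losses interact; the encoder/classifier/discriminator split, the λ trade-off, and the cross-entropy choice are assumptions for illustration, not details taken from the paper.

```python
import numpy as np

def softmax_cross_entropy(logits, labels):
    """Mean cross-entropy of integer labels under softmax(logits)."""
    z = logits - logits.max(axis=1, keepdims=True)
    log_p = z - np.log(np.exp(z).sum(axis=1, keepdims=True))
    return -log_p[np.arange(len(labels)), labels].mean()

def three_player_losses(cls_logits, y, dom_logits, d, lam=1.0):
    """Loss terms of a three-player minimax game (illustrative):
      * the classifier minimizes l_cls (e.g. benign vs. malignant),
      * the discriminator minimizes l_dom (which of the 5 data centers),
      * the encoder minimizes l_cls - lam * l_dom, i.e. it *maximizes* the
        domain loss so its features become data-source invariant."""
    l_cls = softmax_cross_entropy(cls_logits, y)
    l_dom = softmax_cross_entropy(dom_logits, d)
    return l_cls, l_dom, l_cls - lam * l_dom

# Toy batch: 4 samples, 2 tumour classes, 5 data centers, uninformative logits.
l_cls, l_dom, l_enc = three_player_losses(
    np.zeros((4, 2)), np.array([0, 1, 0, 1]),
    np.zeros((4, 5)), np.array([0, 1, 2, 3]))
```

At the equilibrium of this game the discriminator can no longer tell the data centers apart, which is the intended mechanism behind the improved cross-center PCa classification reported above.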