This strategy limits the imaging depth and temporal resolution of the method. In this study, we overcome these limitations by demonstrating acoustic droplet vaporization with 1.1-MHz high-intensity focused ultrasound (HIFU). A short-duration, high-amplitude pulse of focused ultrasound provides a sufficiently strong peak negative pressure to initiate vaporization. A custom imaging sequence was developed to synchronize the HIFU transducer with a linear-array imaging transducer. We show visualization of repeated acoustic activation of perfluorohexane nanodroplets in polyacrylamide tissue-mimicking phantoms. We further demonstrate detection of hundreds of vaporization events from individual nanodroplets with activation thresholds well below the tissue cavitation limit. Overall, this approach has the potential to enable reliable and repeatable contrast-enhanced ultrasound imaging at clinically relevant depths.

This paper examines a combined supervised-unsupervised framework involving dictionary-based blind learning and deep supervised learning for MR image reconstruction from under-sampled k-space data. A major focus of the work is to investigate the possible synergy between features learned in traditional shallow reconstruction using adaptive sparsity-based priors and those learned in deep prior-based reconstruction. Specifically, we propose a framework that uses an unrolled network to refine a blind dictionary learning-based reconstruction. We compare the proposed method with strictly supervised deep learning-based reconstruction approaches on several datasets of varying sizes and anatomies. We also compare it to alternative approaches for combining dictionary-based methods with supervised learning in MR image reconstruction.
The improvements yielded by the proposed framework suggest that the blind dictionary-based approach preserves fine image details that the supervised approach can iteratively refine, indicating that the features learned by the two methods are complementary.

Magnetic resonance imaging (MRI) can provide multiple contrast-weighted images using different pulse sequences and protocols. However, the long acquisition time of these images is a major challenge. To address this limitation, a new pulse sequence referred to as quad-contrast imaging is presented. The quad-contrast sequence enables the simultaneous acquisition of four contrast-weighted images (proton density (PD)-weighted, T2-weighted, PD-fluid-attenuated inversion recovery (FLAIR), and T2-FLAIR) and the synthesis of T1-weighted images and T1- and T2-maps in a single scan. The scan time is less than 6 min and is further reduced to 2 min 50 s using a deep learning-based parallel imaging reconstruction. The natively acquired quad contrasts demonstrate high image quality, comparable to that of conventional scans. The deep learning-based reconstruction successfully reconstructed highly accelerated data (acceleration factor 6), yielding smaller normalized root mean squared errors (NRMSEs) and higher structural similarities (SSIMs) than conventional generalized autocalibrating partially parallel acquisitions (GRAPPA) reconstruction (mean NRMSE of 4.36% vs. 10.54% and mean SSIM of 0.990 vs. 0.953). In particular, the FLAIR contrast is natively acquired and does not suffer from lesion-like artifacts at the boundary of tissue and cerebrospinal fluid, differentiating the proposed method from synthetic imaging methods. Quad-contrast imaging may therefore have the potential to be used in clinical routine as a rapid diagnostic tool.

Error disagreement-based active learning (AL) selects the data that maximally update the error of a classification hypothesis. However, poor human supervision (e.g.
few labels, improper classifier parameters) may weaken or clutter this update; moreover, the computational cost of performing a greedy search to estimate the errors using a deep neural network is intolerable. In this paper, we propose a novel disagreement coefficient based on distribution rather than error, which provides a tighter bound on label complexity and further guarantees generalization in hyperbolic space. Focal points derived from the squared Lorentzian distance yield more effective hyperbolic representations of aspherical distributions, replacing the typical Euclidean, kernelized, and Poincaré centroids. Experiments on different deep AL tasks show that the focal representation, adopted in a tree-likeness splitting, performs significantly better than typical baselines of geometric centroids and error disagreement, as well as state-of-the-art neural network architecture-based AL, dramatically accelerating the learning process.

Human performance capture is a highly important computer vision problem with many applications in movie production and virtual/augmented reality. Many previous performance capture approaches either required expensive multi-view setups or did not recover dense space-time-coherent geometry with frame-to-frame correspondences. We propose a novel deep learning approach for monocular dense human performance capture. Our method is trained in a weakly supervised manner based on multi-view supervision, completely removing the need for training data with 3D ground-truth annotations. The network architecture is based on two separate networks that disentangle the task into a pose estimation step and a non-rigid surface deformation step. Extensive qualitative and quantitative evaluations show that our approach outperforms the state of the art in terms of quality and robustness.
This work is an extended version of [1], in which we provide more detailed explanations, comparisons, and results, as well as applications.

We report a miniaturized, minimally invasive high-density neural recording interface that occupies only a 1.53 mm² footprint for hybrid integration of a flexible probe and a 256-channel integrated circuit chip. To achieve such a compact form factor, we developed a custom flip-chip bonding technique using anisotropic conductive film and analog circuit-under-pad at a tiny pitch of 75 μm. To enhance signal-to-noise ratios, we applied a reference-replica topology that provides matched input impedance for the signal and reference paths in the low-noise amplifiers (LNAs). The analog front-end (AFE) consists of LNAs, buffers, programmable gain amplifiers, 10-bit ADCs, a reference generator, a digital controller, and serial peripheral interfaces (SPIs). The AFE consumes 51.92 μW from 1.2 V and 1.8 V supplies in an area of 0.0161 mm² per channel, implemented in a 180 nm CMOS process. The AFE shows > 60 dB mid-band CMRR, 6.32 μVrms input-referred noise from 0.5 Hz to 10 kHz, and 48 MΩ input impedance at 1 kHz. The fabricated AFE chip was directly flip-chip bonded with a 256-channel flexible polyimide neural probe and assembled on a tiny head-stage PCB.
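The NRMSE figures quoted above for the quad-contrast MRI reconstruction (mean 4.36% vs. 10.54% for GRAPPA) depend on a normalization convention that the abstract does not spell out. A minimal sketch, assuming the common choice of normalizing the ℓ2 error by the ℓ2 norm of the reference image (the function name and toy data below are illustrative, not from the paper):

```python
import numpy as np

def nrmse(recon: np.ndarray, ref: np.ndarray) -> float:
    """NRMSE = ||recon - ref||_2 / ||ref||_2 (one common convention)."""
    return float(np.linalg.norm(recon - ref) / np.linalg.norm(ref))

# Hypothetical example: a reconstruction with a uniform 1% amplitude error
ref = np.arange(1.0, 17.0).reshape(4, 4)   # stand-in for a reference image
recon = 1.01 * ref
print(f"NRMSE = {nrmse(recon, ref):.2%}")  # NRMSE = 1.00%
```

Under other normalizations (e.g. dividing by the intensity range or by the mean of the reference), the absolute numbers change, so reported NRMSE values are only comparable when the convention is stated.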