Shear horizontal (SH) waves are commonly generated by periodic permanent magnet (PPM) electromagnetic acoustic transducers (EMATs) in metallic media. Conventional PPM EMATs generate ultrasonic waves that propagate simultaneously forwards and backwards. This can be an undesirable characteristic, since the backward wave can eventually be reflected, reaching the receiver transducer, where it can mix with the signal of interest. This limitation can be overcome by using two side-shifted PPM arrays and racetrack coils to generate SH waves in a single direction. That design relies on the EMAT's wavefront diffraction to produce constructive and destructive interference, but it produces unwanted backward-travelling side-lobes. Here we present a different design, which uses a conventional PPM array and a dual linear-coil array. The concept was numerically simulated, the main design parameters were assessed, and the unidirectional EMAT was experimentally evaluated on an aluminum plate, generating the SH0 guided wave mode nominally in a single direction. The amplitude ratio of the generated waves at the enhanced side to the weakened side is above 20 dB. Since the wavefronts from the two sources are perfectly aligned, no obvious backward side-lobes are present in the acoustic field, which can significantly reduce the probability of false alarm of an EMAT detection system.

Optoacoustic signals are typically reconstructed into images using inversion algorithms applied in the time domain. However, time-domain reconstructions can be computationally intensive, and therefore slow, when large amounts of raw data are collected from an optoacoustic scan. Here we consider a fast weighted ω-κ (FWOK) algorithm operating in the frequency domain to accelerate the inversion in raster-scan optoacoustic mesoscopy (RSOM), while seamlessly incorporating impulse response correction with minimum computational burden.
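The frequency-domain route can be illustrated with a generic ω-k (Stolt-type) remapping, the family of methods to which FWOK belongs. The sketch below is a minimal, illustrative implementation under simplifying assumptions (2D data, constant speed of sound, no weighting and no impulse response correction); `stolt_reconstruct` and its parameters are hypothetical names, not the paper's FWOK:

```python
import numpy as np

def stolt_reconstruct(p, dt, dx, c=1500.0):
    """Minimal omega-k (Stolt-type) reconstruction sketch.

    p : (nt, nx) array of recorded pressure traces (time x scan position).
    dt, dx : temporal and lateral sampling steps; c : speed of sound (m/s).
    Returns a (nt, nx) real-valued depth image.
    """
    nt, nx = p.shape
    P = np.fft.fft(p, axis=0)             # time  -> temporal frequency omega
    P = np.fft.fft(P, axis=1)             # x     -> lateral wavenumber kx
    w = 2 * np.pi * np.fft.fftfreq(nt, dt)
    kx = 2 * np.pi * np.fft.fftfreq(nx, dx)
    kz = w / c                            # output axial wavenumber grid
    w_sorted = np.fft.fftshift(w)         # monotonic axis for interpolation
    img_k = np.zeros_like(P)
    for j in range(nx):
        # Stolt mapping: a component at axial wavenumber kz originates from
        # the temporal frequency w = c * sign(kz) * sqrt(kz^2 + kx^2)
        w_src = c * np.sign(kz) * np.sqrt(kz**2 + kx[j]**2)
        col = np.fft.fftshift(P[:, j])
        re = np.interp(w_src, w_sorted, col.real, left=0.0, right=0.0)
        im = np.interp(w_src, w_sorted, col.imag, left=0.0, right=0.0)
        img_k[:, j] = re + 1j * im
    img = np.fft.ifft(np.fft.ifft(img_k, axis=1), axis=0)
    return img.real
```

The entire remapping is two FFTs, one interpolation per scan line, and two inverse FFTs, which is why frequency-domain inversion scales so much better than per-pixel time-domain back-projection.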
We investigate the FWOK performance with RSOM measurements from phantoms and mice in vivo, and obtain a 360-fold speed improvement over inversions based on the time-domain back-projection algorithm. This previously unexplored inversion of in vivo optoacoustic data with impulse response correction in frequency-domain reconstructions points to a promising strategy for accelerating optoacoustic imaging computations, toward video-rate tomography.

We propose a novel unsupervised deep-learning-based algorithm for dynamic magnetic resonance imaging (MRI) reconstruction. Dynamic MRI requires rapid data acquisition for the study of moving organs such as the heart. We introduce a generalized version of the deep-image-prior approach, which optimizes the weights of a reconstruction network to fit a sequence of sparsely acquired dynamic MRI measurements. Our method needs neither prior training nor additional data. In particular, for cardiac images, it does not require the marking of heartbeats or the reordering of spokes. The key ingredients of our method are threefold: 1) a fixed low-dimensional manifold that encodes the temporal variations of images; 2) a network that maps the manifold into a more expressive latent space; and 3) a convolutional neural network that generates a dynamic series of MRI images from the latent variables and that favors their consistency with the measurements in k-space. Our method outperforms the state-of-the-art methods quantitatively and qualitatively on both retrospective and real fetal cardiac datasets. To the best of our knowledge, this is the first unsupervised deep-learning-based method that can reconstruct the continuous variation of dynamic MRI sequences with high spatial resolution.

A lot of work has been done towards reconstructing the 3D facial structure from single images by capitalizing on the power of Deep Convolutional Neural Networks (DCNNs).
In recent works, the texture features either correspond to components of a linear texture space or are learned by auto-encoders directly from in-the-wild images. In all cases, the quality of the facial texture reconstruction is still not capable of modeling facial texture with high-frequency details. In this paper, we take a radically different approach and harness the power of Generative Adversarial Networks (GANs) and DCNNs in order to reconstruct the facial texture and shape from single images. That is, we utilize GANs to train a very powerful facial texture prior from a large-scale 3D texture dataset. Then, we revisit the original 3D Morphable Models (3DMMs) fitting, making use of non-linear optimization to find the optimal latent parameters that best reconstruct the test image, but under a new perspective. In order to be robust to initialization and to expedite the fitting process, we propose a novel self-supervised regression-based approach. We demonstrate excellent results in photorealistic and identity-preserving 3D face reconstruction and achieve, for the first time to the best of our knowledge, facial texture reconstruction with high-frequency details.

This paper presents a context-aware tracing strategy (CATS) for crisp edge detection with deep edge detectors, based on the observation that the localization ambiguity of deep edge detectors is mainly caused by two mixing phenomena of convolutional neural networks: feature mixing in edge classification and side mixing when fusing side predictions. CATS consists of two modules: a novel tracing loss that performs feature unmixing by tracing boundaries for better side edge learning, and a context-aware fusion block that tackles side mixing by aggregating the complementary merits of the learned side edges. Experiments demonstrate that the proposed CATS can be integrated into modern deep edge detectors to improve localization accuracy.
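For reference, edge detectors on BSDS are scored with the F-measure, the harmonic mean of precision and recall; the ODS variant picks the single best threshold for the whole dataset. A minimal sketch of that scoring follows (the real benchmark additionally matches predicted and ground-truth edge pixels within a distance tolerance, which is omitted here; both function names are illustrative):

```python
def f_measure(precision, recall):
    """Harmonic mean of precision and recall (the F1 score behind ODS/OIS)."""
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

def ods_f_measure(per_threshold_pr):
    """Optimal Dataset Scale: best F over (precision, recall) pairs
    computed at thresholds shared across the whole dataset."""
    return max(f_measure(p, r) for p, r in per_threshold_pr)
```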
With the vanilla VGG-16 backbone on the BSDS dataset, our CATS improves the F-measure (ODS) of the RCF and BDCN deep edge detectors by 12% and 6%, respectively, when evaluated without the morphological non-maximal suppression scheme for edge detection.

Timely treatment is crucial to the survival of patients with brain stroke. Thus, a fast, cost-effective, and portable device is needed for early, on-the-spot diagnosis of stroke patients. A 3D electromagnetic head imaging system for rapid brain stroke diagnosis with a wearable and lightweight platform is presented. The platform comprises a custom-built flexible cap with a 24-element planar antenna array and a flexible matching-medium layer. The custom-built cap is made of an engineered polymer-ceramic composite substrate of RTV silicone rubber and aluminum oxide (Al2O3) for enhanced dielectric properties, mechanical flexibility, and robustness. The array is arranged into two elliptical rings that are entirely incorporated into the flexible cap. The antenna elements within the system are compact, with low specific absorption rate (SAR) values over the utilized frequency range of 0.9-2.5 GHz. Moreover, a flexible matching-medium layer is introduced in front of the antenna apertures to enhance impedance matching with the skin.
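The role of the matching layer can be illustrated with the standard normal-incidence reflection coefficient between lossless, non-magnetic media, where the intrinsic impedance scales as 1/√εr: grading the permittivity from air toward the skin keeps each interface's reflection smaller than the direct air-skin jump. The permittivity values below are purely illustrative, not the paper's measured material properties (a practical layer also exploits thickness effects that this static sketch ignores):

```python
import math

def reflection_coeff(eps1, eps2):
    """Normal-incidence reflection coefficient magnitude between two
    lossless, non-magnetic media with relative permittivities eps1, eps2.
    Intrinsic impedance eta ~ 1/sqrt(eps_r), Gamma = (eta2-eta1)/(eta2+eta1)."""
    eta1 = 1.0 / math.sqrt(eps1)
    eta2 = 1.0 / math.sqrt(eps2)
    return abs((eta2 - eta1) / (eta2 + eta1))

# Illustrative permittivities (hypothetical values for air, matching layer, skin):
eps_air, eps_match, eps_skin = 1.0, 10.0, 40.0
direct = reflection_coeff(eps_air, eps_skin)              # air -> skin in one step
via_layer = max(reflection_coeff(eps_air, eps_match),     # worst single interface
                reflection_coeff(eps_match, eps_skin))    # through the layer
```

With these numbers the direct air-skin interface reflects noticeably more than either graded interface, which is the basic motivation for placing a matching medium in front of the antenna apertures.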