From this perspective, the low-rank and sparse properties are utilized to decompose the range profiles of the main body and the micro-motion parts, respectively. Moreover, the sparsity of the ISAR image is also utilized as a constraint to eliminate the interference caused by the sparse aperture. Hence, SA-ISAR imaging with the removal of m-D effects is modeled as a triply constrained underdetermined optimization problem. The alternating direction method of multipliers (ADMM) and linearized ADMM (L-ADMM) are further utilized to solve the problem with high efficiency (an ADMM-style update is sketched after the next abstract). Experimental results based on both simulated and measured data validate the effectiveness of the proposed algorithm.

Due to the continuing boom of surveillance and Web videos, video moment localization, as an important branch of video content analysis, has attracted wide attention from both industry and academia in recent years. It is, however, a non-trivial task due to the following challenges: temporal context modeling, intelligent moment candidate generation, and the necessary efficiency and scalability in practice. To address these impediments, we present a deep end-to-end cross-modal hashing network. To be specific, we first design a video encoder relying on a bidirectional temporal convolutional network to simultaneously generate moment candidates and learn their representations. Because the video encoder characterizes temporal contextual structures at multiple scales of time windows, we obtain enhanced moment representations. As a counterpart, we design an independent query encoder for user intention understanding. Thereafter, a cross-modal hashing module is developed to project these two heterogeneous representations into a shared isomorphic Hamming space for compact hash code learning. After that, we can effectively estimate the relevance score of each "moment-query" pair via the Hamming distance (sketched below). Besides effectiveness, our model is far more efficient and scalable, since the hash codes of videos can be learned offline. Experimental results on real-world datasets have justified the superiority of our model over several state-of-the-art competitors.
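To make the ADMM alternation in the SA-ISAR abstract concrete, below is a minimal robust-PCA-style sketch in Python: a two-term low-rank-plus-sparse split solved by alternating singular-value and soft thresholding with a multiplier update. It is a generic illustration under simplifying assumptions, not the paper's triply constrained SA-ISAR model, and all function names are illustrative.

```python
import numpy as np

def svt(M, tau):
    """Singular-value thresholding: proximal operator of the nuclear norm."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

def soft(M, tau):
    """Soft thresholding: proximal operator of the l1 norm."""
    return np.sign(M) * np.maximum(np.abs(M) - tau, 0.0)

def lowrank_sparse_admm(D, lam=None, mu=1.0, n_iter=100):
    """Split D into a low-rank part L (main body) plus a sparse part S
    (micro-motion), D ~ L + S, via ADMM-style alternating proximal updates."""
    if lam is None:
        lam = 1.0 / np.sqrt(max(D.shape))
    L, S, Y = (np.zeros_like(D) for _ in range(3))
    for _ in range(n_iter):
        L = svt(D - S + Y / mu, 1.0 / mu)   # low-rank (nuclear-norm) step
        S = soft(D - L + Y / mu, lam / mu)  # sparse (l1) step
        Y += mu * (D - L - S)               # dual ascent on the multiplier
    return L, S
```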
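The relevance-scoring step of the hashing abstract reduces to comparing binary codes in Hamming space. A minimal sketch, assuming sign-binarized embeddings and random vectors standing in for the learned moment and query representations:

```python
import numpy as np

def binarize(embedding):
    """Sign-based binarization, a common inference-time step in hashing networks."""
    return (embedding >= 0).astype(np.uint8)

def hamming(a, b):
    return int(np.count_nonzero(a != b))

rng = np.random.default_rng(0)
moment_codes = binarize(rng.standard_normal((3, 64)))  # 3 candidate moments, 64 bits
query_code = binarize(rng.standard_normal(64))

# Smaller Hamming distance = higher estimated relevance.
ranking = sorted(range(len(moment_codes)),
                 key=lambda i: hamming(moment_codes[i], query_code))
print(ranking)
```

Because moment codes can be computed offline, query-time matching costs only a few XOR/popcount operations per candidate, which is where the claimed efficiency and scalability come from.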
Ultra-high definition (UHD) 360 videos encoded in fine quality are typically too large to stream in their entirety over bandwidth (BW)-constrained networks. One popular approach is to interactively extract and send a spatial sub-region corresponding to a viewer's current field-of-view (FoV) in a head-mounted display (HMD) for more BW-efficient streaming. Due to the non-negligible round-trip-time (RTT) delay between server and client, accurate head movement prediction that foretells a viewer's future FoVs is essential. In this paper, we cast the head movement prediction task as a sparse directed graph learning problem: three sources of relevant information, namely collected viewers' head movement traces, a 360 image saliency map, and a biological human head model, are distilled into a view transition Markov model. Specifically, we formulate a constrained maximum a posteriori (MAP) problem with likelihood and prior terms defined using the three information sources. We solve the MAP problem alternately using a hybrid iterative reweighted least squares (IRLS) and Frank-Wolfe (FW) optimization strategy. In each FW iteration, a linear program (LP) is solved, whose runtime is reduced thanks to warm-start initialization. Having estimated a Markov model from data, we employ it to optimize a tile-based 360 video streaming system (a toy prediction sketch appears below). Extensive experiments show that our head movement prediction scheme noticeably outperformed existing proposals, and our optimized tile-based streaming scheme outperformed competitors in rate-distortion performance.

Quantitative ultrasound (QUS) can reveal crucial information about tissue properties such as scatterer density. If the scatterer density per resolution cell is above or below 10, the tissue is considered fully developed speckle (FDS) or under-developed speckle (UDS), respectively. Conventionally, the scatterer density has been classified using statistical parameters estimated from the amplitude of backscattered echoes. However, if the patch size is small, the estimation is not accurate. These parameters are also highly dependent on imaging settings. In this paper, we adapt convolutional neural network (CNN) architectures for QUS and train them using simulation data. We further improve the network's performance by utilizing patch statistics as additional input channels (sketched below). Inspired by deep supervision and multi-task learning, we propose a second method to exploit patch statistics. We evaluate the networks using simulation data and experimental phantoms. We also compare our proposed methods with several classic and deep learning models and demonstrate their superior performance in classifying tissues with different scatterer density values. The results also show that we are able to classify scatterer density under different imaging settings with no need for a reference phantom. This work demonstrates the potential of CNNs for classifying scatterer density in ultrasound images.

A straight short-beam linear piezoelectric motor constructed with two sets of ceramic actuators separated by a quarter-wavelength interval is designed in this article. The piezoelectric ceramic actuators are fabricated integrally with the beam body, which is driven by a two-phase circuit of equal amplitude but with a phase difference of π/4. A traveling wave is formed by superimposing the standing waves generated by each set of ceramic actuators (see the numerical sketch below). At the ends of the short beam, a wave-reduction mechanism with a larger cross-sectional area is designed so that wave reflection is effectively diminished and the traveling wave is preserved. The developed short-beam linear piezoelectric motor is estimated to produce an ideal output speed of 169 mm/s under an applied voltage of Vpp = 300 V at 45.49 kHz. Instead of operating as a stator that drives a carriage, the short-beam linear piezoelectric motor is mounted on a guide slider, thereby forming a linear piezoelectric motor stage. When driving the linear stage with a preload of 300 g and a friction coefficient of about 0.15, the measured propulsion force is about 4.8 N, the speed is about 56 mm/s, and the position resolution reaches the submicron scale.
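To illustrate how an estimated view-transition Markov model can feed tile selection in the 360 streaming abstract, here is a toy Python sketch with a hypothetical 4-tile layout, hand-picked transition probabilities, and a naive top-k bandwidth budget; the actual system uses finer tilings and rate-distortion optimization rather than this simple rule.

```python
import numpy as np

# Hypothetical 4-tile layout; P[i, j] = probability of the viewer's FoV moving
# from tile i to tile j in one time step, e.g. estimated from head-movement traces.
P = np.array([[0.7, 0.2, 0.1, 0.0],
              [0.1, 0.7, 0.1, 0.1],
              [0.1, 0.1, 0.7, 0.1],
              [0.0, 0.1, 0.2, 0.7]])

current_fov = np.array([1.0, 0.0, 0.0, 0.0])    # viewer currently in tile 0

# Predict the FoV distribution RTT steps ahead by repeated transition, then
# fetch the most probable tiles within a toy bandwidth budget.
rtt_steps = 3
future = current_fov @ np.linalg.matrix_power(P, rtt_steps)
budget = 2                                       # can afford two high-quality tiles
chosen = np.argsort(future)[::-1][:budget]
print(future, chosen)
```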
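A minimal sketch of feeding patch statistics to a CNN as extra input channels, written in PyTorch; the choice of per-patch mean and variance maps and the tiny architecture are assumptions for illustration, not the paper's networks.

```python
import torch
import torch.nn as nn

class QUSNet(nn.Module):
    """Toy FDS-vs-UDS patch classifier. The envelope patch is stacked with
    per-patch mean and variance maps as extra input channels; this particular
    choice of statistics and the small architecture are illustrative."""
    def __init__(self, n_classes=2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(32, n_classes)

    def forward(self, envelope):                  # envelope: (B, 1, H, W)
        mean = envelope.mean(dim=(2, 3), keepdim=True)
        var = ((envelope - mean) ** 2).mean(dim=(2, 3), keepdim=True)
        x = torch.cat([envelope,
                       mean.expand_as(envelope),
                       var.expand_as(envelope)], dim=1)  # stats as channels
        return self.head(self.features(x).flatten(1))

logits = QUSNet()(torch.rand(4, 1, 32, 32))       # four random 32x32 patches
```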
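Finally, the traveling-wave formation by superposing two standing waves can be verified numerically. This sketch uses the textbook construction of a quarter-wavelength spatial offset with a quarter-period (π/2) temporal offset; note that the abstract itself specifies a π/4 drive phase, and the numeric values below are illustrative rather than the motor's actual parameters.

```python
import numpy as np

# Textbook identity: sin(kx)sin(wt) + cos(kx)cos(wt) = cos(kx - wt), i.e. two
# standing waves offset by a quarter wavelength in space and a quarter period
# in time superimpose into a traveling wave.
k = 2 * np.pi / 0.02            # toy wavenumber for a 20 mm wavelength
w = 2 * np.pi * 45.49e3         # drive frequency from the abstract, 45.49 kHz
x = np.linspace(0.0, 0.06, 7)   # sample points along the beam (meters)
for t in (0.0, 5e-6, 10e-6):
    standing_1 = np.sin(k * x) * np.sin(w * t)
    standing_2 = np.sin(k * x + np.pi / 2) * np.sin(w * t + np.pi / 2)
    assert np.allclose(standing_1 + standing_2, np.cos(k * x - w * t))
```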