Then, a global search, which integrates BnB and gradient-based algorithms, is implemented to achieve a coarse alignment of the two scans. During the global search, the registration quality assessment offers a beneficial stopping criterion to detect whether a good result has been obtained.

Deep neural networks can easily be fooled by an adversary using minuscule perturbations to input images. Existing defense techniques suffer greatly under white-box attack settings, where an adversary has full knowledge of the network and can iterate several times to find strong perturbations. We observe that the main reason for such vulnerabilities is the close proximity of different class samples in the learned feature space of deep models, which allows model decisions to be completely changed by adding an imperceptible perturbation to the inputs. To counter this, we propose to disentangle the intermediate feature representations of deep networks class-wise, specifically forcing the features of each class to lie inside a convex polytope that is maximally separated from the polytopes of the other classes. In this manner, the network is forced to learn distinct and distant decision regions for each class. We observe that this simple constraint on the features greatly enhances the robustness of learned models, even against the strongest white-box attacks, without degrading classification performance on clean images. We report extensive evaluations in both black-box and white-box attack scenarios and show significant gains over state-of-the-art defenses.

Visual captioning, the task of describing an image or a video using one or a few sentences, is challenging owing to the complexity of understanding copious visual information and describing it in natural language.
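The class-separation idea above can be illustrated with a simple distance-based objective: pull each feature toward its own class center and push it away from the centers of other classes. This is only a hypothetical sketch in the spirit of the described defense (the function name, fixed prototypes, and hinge form are our assumptions, not the paper's actual polytope loss):

```python
import numpy as np

def prototype_separation_loss(features, labels, prototypes, margin=1.0):
    """Hypothetical sketch: pull each sample toward its class prototype and
    hinge-penalize other prototypes that come within `margin` of the sample."""
    pull, push = 0.0, 0.0
    for f, y in zip(features, labels):
        d = np.linalg.norm(prototypes - f, axis=1)      # distance to every class prototype
        pull += d[y] ** 2                               # stay close to own class region
        other = np.delete(d, y)                         # distances to the other classes
        push += np.maximum(0.0, margin - other).sum()   # penalize overly close classes
    return (pull + push) / len(features)

# Two well-separated classes: features exactly at their prototypes give zero loss.
protos = np.array([[0.0, 0.0], [10.0, 0.0]])
print(prototype_separation_loss(protos.copy(), [0, 1], protos))  # 0.0
```

Minimizing such a term alongside the usual classification loss is what pushes decision regions apart; samples drifting toward a foreign class center incur a positive penalty.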
Motivated by the success of neural machine translation, previous work applies sequence-to-sequence learning to translate videos into sentences. In this work, unlike previous work that encodes visual information in a single stream, we introduce a novel Sibling Convolutional Encoder (SibNet) for visual captioning, which employs a two-branch architecture to collaboratively encode videos. The first, content branch encodes the visual content information of the video with an autoencoder, capturing visual appearance information as other networks often do, while the second, semantic branch encodes the semantic information of the video via visual-semantic joint embedding, which yields a complementary representation by considering semantics when extracting features. The two branches are then combined with a soft-attention mechanism and fed into an RNN decoder to generate captions. By explicitly capturing both content and semantic information, SibNet can better represent the rich information in videos. To validate the advantages of SibNet, we conduct experiments on two video captioning benchmarks, YouTube2Text and MSR-VTT. Our results demonstrate that SibNet outperforms existing methods across different evaluation metrics.

OBJECTIVE Recently, electroencephalography (EEG)-based brain-computer interfaces (BCIs) have made tremendous progress in increasing communication speed. However, current BCI systems can implement only a small number of command codes, which hampers their applicability. METHODS This study developed a high-speed hybrid BCI system containing as many as 108 instructions, which were encoded by concurrent P300 and steady-state visual evoked potential (SSVEP) features and decoded by an ensemble task-related component analysis method.
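The soft-attention fusion of the two branches can be sketched as a decoder-state-conditioned convex combination of the branch features. This is a minimal illustration, not SibNet's actual implementation (the bilinear scoring matrix `W` and single-vector features are our simplifying assumptions; the real model attends over temporal feature sequences):

```python
import numpy as np

def soft_attention_fuse(content_feat, semantic_feat, decoder_state, W):
    """Weight the two branch features by their relevance to the decoder state."""
    feats = np.stack([content_feat, semantic_feat])  # (2, d): one row per branch
    scores = feats @ W @ decoder_state               # (2,): bilinear relevance scores
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()                         # softmax over the two branches
    return weights @ feats                           # (d,): fused feature for the RNN

rng = np.random.default_rng(0)
d = 8
fused = soft_attention_fuse(rng.normal(size=d), rng.normal(size=d),
                            rng.normal(size=d), rng.normal(size=(d, d)))
print(fused.shape)  # (8,)
```

Because the softmax weights are positive and sum to one, the fused vector always lies elementwise between the content and semantic features, letting the decoder emphasize whichever branch is more informative at each step.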
Notably, besides the frequency-phase-modulated SSVEP and time-modulated P300 features contained in traditional hybrid P300-and-SSVEP paradigms, this study found two new distinct EEG features in the concurrent P300 and SSVEP responses, i.e., time-modulated SSVEP and frequency-phase-modulated P300. Ten subjects spelled in both offline and online cue-guided spelling experiments, and another ten subjects took part in online copy-spelling experiments. RESULTS Offline analyses demonstrate that the concurrent P300 and SSVEP features provide sufficient classification information to correctly select the target from 108 characters in 1.7 seconds. Online cue-guided spelling and copy-spelling tests further show that the proposed BCI system reaches average information transfer rates (ITRs) of 172.46±32.91 bits/min and 164.69±33.32 bits/min, respectively, with a peak value of 238.41 bits/min (a demo video of online copy-spelling can be found at https://www.youtube.com/watch?v=EW2Q08oHSBo). CONCLUSION We efficiently expand the BCI instruction set to over 100 high-speed command codes, which significantly improves the degrees of freedom of BCIs. SIGNIFICANCE This study holds promise for broadening the applications of BCI systems.

Rotational needle insertion is commonly used in needle biopsy to improve cutting performance. The application of rotational motion during needle insertion has been shown to efficiently reduce the cutting force. However, studies have found that needle rotation can increase tissue damage due to the tissue-winding effect. Bidirectional rotation of the needle during insertion can avoid tissue winding while maintaining a low cutting force. In this study, needle insertion with bidirectional rotation was investigated through mechanical and optical experiments.
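The reported peak ITR is consistent with the standard Wolpaw formula for an N-class speller: a perfect 108-target selection in 1.7 s yields exactly 238.41 bits/min. A minimal sketch (the function name is ours; whether the paper uses this exact definition is an assumption, though it is the conventional one in BCI work):

```python
import math

def itr_bits_per_min(n_targets, accuracy, selection_time_s):
    """Wolpaw information transfer rate for an N-class selection task."""
    n, p = n_targets, accuracy
    bits = math.log2(n)                 # bits per selection at perfect accuracy
    if 0 < p < 1:
        # Penalty for errors: binary-entropy-style correction over n-1 wrong targets
        bits += p * math.log2(p) + (1 - p) * math.log2((1 - p) / (n - 1))
    elif p <= 0:
        bits = 0.0
    return bits * 60.0 / selection_time_s

# 108-target speller, perfect selection in 1.7 s:
print(round(itr_bits_per_min(108, 1.0, 1.7), 2))  # 238.41
```

The averaged online ITRs (172.46 and 164.69 bits/min) then reflect per-subject accuracies below 100% and/or longer effective selection times.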
First, needle insertion tests were performed on gelatin-based tissue-phantom samples to understand the effect of bidirectional needle rotation on the cutting force. Subsequently, the effective strain, an indicator of tissue damage, was observed at cross-sections of the samples in the axial and radial directions of the needle using the digital image correlation (DIC) technique. The primary findings of this study are as follows: (1) higher needle insertion speeds result in higher cutting forces and effective strains at the axial cross-section; (2) increasing needle rotation reduces the cutting force and the effective strain at the axial cross-section but increases the effective strain at the radial cross-section; (3) applying bidirectional rotation decreases the mean effective strain at the radial cross-section by 10%-25% while maintaining a low cutting force. In clinical applications, bidirectional rotation can be a useful strategy for simultaneously reducing the cutting force and tissue damage, leading to better cutting performance and lower risks of bleeding and hematoma.