These fundamental findings contribute to deciphering the mechanisms of tumour growth and are expected to provide new knowledge towards the development of future bio-nanomachine-based therapeutic approaches for GBM.

Drug refractory epilepsy (RE) is believed to be associated with structural lesions, but some RE patients show no significant structural abnormalities (RE-no-SA) on conventional magnetic resonance imaging scans. Since most medically controlled epilepsy (MCE) patients also do not exhibit structural abnormalities, a reliable assessment is needed to differentiate RE-no-SA patients from MCE patients and avoid misdiagnosis and inappropriate treatment. Using resting-state scalp electroencephalogram (EEG) datasets, we extracted spatial pattern of network (SPN) features from the functional and effective EEG networks of both RE-no-SA and MCE patients. Compared with traditional resting-state EEG network properties, the SPN features were remarkably superior in classifying the two patient groups, reaching accuracies of 90.00% and 80.00% for the functional and effective EEG networks, respectively. By further fusing the SPN features of the functional and effective networks, the accuracy rose to 96.67%, with a sensitivity of 100% and a specificity of 92.86%. Overall, these findings indicate not only that the fused functional and effective SPN features are promising as reliable measurements for distinguishing RE-no-SA patients from MCE patients, but also that they may provide a new perspective for exploring the complex neurophysiology of refractory epilepsy.

Magnetic Resonance Imaging (MRI) is a widely used imaging technique for assessing brain tumors. Accurately segmenting brain tumors from MR images is key to clinical diagnosis and treatment planning, and multi-modal MR images provide complementary information for accurate segmentation. In clinical practice, however, some imaging modalities are often missing. In this paper, we present a novel brain tumor segmentation algorithm that handles missing modalities. Because a strong correlation exists between modalities, we propose a correlation model to explicitly represent the latent multi-source correlation; this correlation representation makes the segmentation more robust when a modality is missing. First, the individual representation produced by each encoder is used to estimate modality-independent parameters. Then, the correlation model transforms all individual representations into latent multi-source correlation representations. Finally, the correlation representations across modalities are fused via an attention mechanism into a shared representation that emphasizes the most important features for segmentation. Evaluated on the BraTS 2018 and BraTS 2019 datasets, our model outperforms current state-of-the-art methods and produces robust results when one or more modalities are missing.
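As a concrete illustration of the attention-based fusion step, the minimal PyTorch sketch below masks out missing modalities before computing attention weights over the per-modality representations. The module and names (ModalityAttentionFusion, feat_dim, present_mask) are our own illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch of attention-weighted fusion of per-modality latent
# representations into a shared representation, robust to missing modalities.
import torch
import torch.nn as nn

class ModalityAttentionFusion(nn.Module):
    def __init__(self, feat_dim: int):
        super().__init__()
        # Scores each modality's correlation representation.
        self.score = nn.Linear(feat_dim, 1)

    def forward(self, reps, present_mask):
        # reps: (batch, n_modalities, feat_dim) correlation representations
        # present_mask: (batch, n_modalities) bool, False where a modality is missing
        logits = self.score(reps).squeeze(-1)               # (batch, n_modalities)
        logits = logits.masked_fill(~present_mask, float("-inf"))
        weights = torch.softmax(logits, dim=-1)             # attention over available modalities only
        fused = (weights.unsqueeze(-1) * reps).sum(dim=1)   # (batch, feat_dim) shared representation
        return fused

# Example: four MR modalities (T1, T1c, T2, FLAIR) with T1c missing in the first sample.
fusion = ModalityAttentionFusion(feat_dim=128)
reps = torch.randn(2, 4, 128)
mask = torch.tensor([[True, False, True, True],
                     [True, True,  True, True]])
shared = fusion(reps, mask)  # missing modality contributes zero attention weight
```

Setting a missing modality's logit to negative infinity drives its softmax weight to zero, so the shared representation is built only from the modalities that are actually available.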
In the few-shot common-localization task, given a few support images without bounding-box annotations at each episode, the goal is to localize the common object in a query image of unseen categories. The task involves reasoning about the common object across the given images and predicting the spatial locations of objects with different shapes, sizes, and orientations. In this work, we propose a common-centric localization (CCL) network for few-shot common-localization. The motivation is to learn common object features by dynamic feature relation reasoning via a graph convolutional network with conditional feature aggregation. First, we propose a local common object region generation pipeline to reduce background noise caused by feature misalignment: more accurate object locations are predicted for each support image by replacing the query with images from the support set. Second, we introduce a graph convolutional network with dynamic feature transformation to enforce common object reasoning; to enhance discriminability during feature matching and generalize better to unseen scenarios, a conditional feature encoding function adaptively alters visual features according to the input query. Third, we introduce a common-centric relation structure to model the correlation between the common features and the query image feature, so that the generated common features guide the query image feature towards a more common-object-related representation. We evaluate our CCL network on four datasets (CL-VOC-07, CL-VOC-12, CL-COCO, and CL-VID) and obtain significant improvements over the state of the art; the quantitative results confirm the effectiveness of our network.

Analysis of egocentric video has recently drawn the attention of researchers in the computer vision and multimedia communities. In this paper, we propose a weakly supervised superpixel-level joint framework for the localization, recognition, and summarization of actions in an egocentric video. We first recognize and localize single as well as multiple actions in each frame of an egocentric video and then construct a summary of the detected actions. The superpixel-level solution enables precise localization of actions and also improves recognition accuracy. Superpixels are extracted within the central regions of the egocentric video frames, these regions being determined through a previously developed center-surround model. A sparse spatio-temporal video representation graph is constructed in the deep feature space with the superpixels as nodes, and a weakly supervised solution using random walks yields action labels for each superpixel. After determining the action label(s) for each frame from its constituent superpixels, we apply a fractional-knapsack-type formulation to obtain a summary of actions. Experimental comparisons on the publicly available ADL, GTEA, EGTEA Gaze+, EgoGesture, and EPIC-Kitchens datasets show the effectiveness of the proposed solution.
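To make the summarization step concrete, the short sketch below shows the classic fractional-knapsack greedy selection that the abstract's formulation evokes: detected action segments are ranked by score per second and taken, possibly partially, until a duration budget is filled. The Segment fields, scores, and budget are illustrative assumptions, not the authors' exact objective.

```python
# Hypothetical sketch of fractional-knapsack-style summary selection:
# greedily pick detected action segments by value density (score / duration)
# until the summary's duration budget is exhausted.
from dataclasses import dataclass

@dataclass
class Segment:
    label: str        # recognized action label
    duration: float   # seconds
    score: float      # confidence / importance of the detection

def knapsack_summary(segments, budget):
    """Return (segment, kept_fraction) pairs whose total duration fits the budget."""
    # Sort by value density, the classic fractional-knapsack criterion.
    ranked = sorted(segments, key=lambda s: s.score / s.duration, reverse=True)
    summary, remaining = [], budget
    for seg in ranked:
        if remaining <= 0:
            break
        take = min(seg.duration, remaining)   # the last segment may be taken only partially
        summary.append((seg, take / seg.duration))
        remaining -= take
    return summary

# Toy example with made-up detections and an 8-second budget.
segs = [Segment("pour water", 4.0, 0.9),
        Segment("open fridge", 2.0, 0.7),
        Segment("stir pan", 6.0, 0.8)]
for seg, frac in knapsack_summary(segs, budget=8.0):
    print(f"{seg.label}: keep {frac:.0%}")
```

Because segments are divisible here, the greedy density ordering is optimal for the fractional variant, which is what makes this formulation attractive for fitting a fixed-length summary.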