Furthermore, good performance can be obtained using three standard uptake value (SUV) features of the PET images and seven clinical features, with average accuracy, sensitivity, and specificity across all sets reaching 92%, 91%, and 92%, respectively. Therefore, the fused features have the potential to predict lung metastasis in STSs.

Electroencephalography (EEG) plays an important role in monitoring the brain activity of patients with epilepsy and has been used extensively to diagnose epilepsy. Clinically, reading tens or even hundreds of hours of EEG recordings is very time consuming, so automatic seizure detection is of great importance. However, the large diversity of EEG signals across patients makes seizure detection challenging for both human experts and automated methods. We propose three deep transfer convolutional neural networks (CNNs) for automatic cross-subject seizure detection, based on VGG16, VGG19, and ResNet50, respectively. The original dataset is the CHB-MIT scalp EEG dataset. We use the short-time Fourier transform to generate time-frequency spectrum images as the input dataset, and positive samples are augmented because seizures are infrequent. The model parameters pretrained on ImageNet are transferred to our models, and the fine-tuned top layers, with an output layer of two neurons for binary classification (seizure or nonseizure), are trained from scratch. The input dataset is then randomly shuffled and divided into three partitions for training, validating, and testing the deep transfer CNNs. The average accuracies achieved by the deep transfer CNNs based on VGG16, VGG19, and ResNet50 are 97.75%, 98.26%, and 96.17%, respectively. Based on these experimental results, our method could prove effective for cross-subject seizure detection.

We propose a new method for fast organ classification and segmentation of abdominal magnetic resonance (MR) images. Magnetic resonance imaging (MRI) has become a widely used high-tech imaging examination in recent years. Recognition of specific target areas (organs) in MR images is one of the key issues in computer-aided diagnosis of medical images. Artificial neural network technology has made significant progress in image processing based on the multimodal MR attributes of each pixel. However, despite the generation of large-scale data, there are few studies on the rapid processing of large-scale MRI data. To address this deficiency, we present a fast radial basis function artificial neural network (Fast-RBF) algorithm. The contributions of this work are as follows. (1) The proposed algorithm achieves fast processing of large-scale image data by introducing the ε-insensitive loss function, the structural risk term, and the core-set principle; we apply this algorithm to the identification of specific target areas in MR images. (2) For each abdominal MRI case, we use four MR sequences (fat, water, in-phase (IP), and opposed-phase (OP)) and the position coordinates (x, y) of each pixel as the input of the algorithm, and we use three classifiers to identify the liver and kidneys in the MR images. Experiments show that the proposed method achieves higher precision in the recognition of specific regions of medical images and adapts better to large-scale datasets than the traditional RBF algorithm.
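To make the input construction in the Fast-RBF study concrete, the following is a minimal sketch, assuming NumPy and scikit-learn, of how the four co-registered MR sequences and the pixel coordinates can be stacked into per-pixel feature vectors. A standard RBF-kernel SVM stands in for the authors' Fast-RBF classifier (which additionally uses the ε-insensitive loss, structural risk term, and core-set reduction), and the arrays and labels are synthetic placeholders.

```python
# Sketch: per-pixel features (fat, water, IP, OP intensities + (x, y) coordinates)
# classified with a standard RBF-kernel SVM as a stand-in for the Fast-RBF model.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def pixel_features(fat, water, ip, op):
    """Stack four co-registered MR sequences and pixel coordinates into
    one 6-D feature vector per pixel. Inputs are 2-D arrays of equal shape."""
    h, w = fat.shape
    ys, xs = np.mgrid[0:h, 0:w]                  # row (y) and column (x) indices
    feats = np.stack([fat, water, ip, op, xs, ys], axis=-1)
    return feats.reshape(-1, 6)

# Synthetic arrays standing in for one abdominal MRI case.
rng = np.random.default_rng(0)
fat, water, ip, op = (rng.random((64, 64)) for _ in range(4))
X = pixel_features(fat, water, ip, op)
y = rng.integers(0, 3, size=X.shape[0])          # placeholder labels: 0 background, 1 liver, 2 kidney

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", gamma="scale"))
clf.fit(X, y)
pred_mask = clf.predict(X).reshape(64, 64)       # per-pixel organ labels
```

In practice, one classifier per target organ (as the three classifiers mentioned above) or a multi-class model over all organs are both reasonable readings of this setup; the sketch uses a single multi-class classifier for brevity.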
Experimental research on living beings faces several obstacles that go beyond ethical and moral issues. One of the proposed solutions to these problems is the computational modelling of anatomical structures. The present study describes a methodology for obtaining high-biofidelity biomodels, using a novel imaging technique together with several CAD/CAM computer programs that allow greater precision in obtaining a biomodel with highly accurate morphological specifications of the molar and the tissues that compose it. The biomodel developed is the first lower molar subjected to a basic chewing simulation through the finite element method, resulting in a viable model that can be subjected to various simulations to analyse molar biomechanical characteristics, as well as pathological conditions, in order to evaluate restorative materials and develop treatment plans. When research focuses on medical and dental questions, numerical analyses allow several tools commonly used by mechanical engineers to provide new answers to old problems in these areas. With this methodology, it is possible to build high-fidelity models regardless of the size of the anatomical structure or the complexity of its geometry and internal tissues, so it can be used in any area of medicine.

The diagnosis and treatment of epilepsy is a significant direction for both machine learning and brain science. This paper proposes a new fast enhanced exemplar-based clustering (FEEC) method for incomplete EEG signals. The algorithm first compresses the list of potential exemplars and reduces the pairwise similarity matrix. After processing the most complete data in the first stage, FEEC then extends the exemplar list with the few incomplete data. A new compressed similarity matrix is constructed, and the scale of this matrix is greatly reduced. Finally, FEEC optimizes the new objective function using the enhanced α-expansion move method. Moreover, owing to the pairwise relationships, FEEC also improves the generalization of the algorithm. The performance of the proposed clustering algorithm is comprehensively verified against other exemplar-based models through experiments on two datasets.

To achieve robust, high-performance computer-aided diagnosis systems for lymph nodes, CT images are typically collected from multiple centers, which leads to inconsistent model performance across data source centers. The variability adaptation problem for lymph node data, which is related to the problem of domain adaptation in deep learning, differs from the general domain adaptation problem because of the typically larger CT image size and more complex data distributions. Therefore, domain adaptation for this problem needs to consider the shared feature representation and even the conditioning information of each domain so that the adaptation network can capture significant discriminative representations in a domain-invariant space. This paper extracts domain-invariant features based on a cross-domain confounding representation and proposes a cycle-consistency learning framework to encourage the network to preserve class-conditioning information through cross-domain image translations. Compared with different domain adaptation methods, the accuracy rate of our method reaches at least 4.
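To illustrate the cycle-consistency idea used in the lymph node study, the following is a minimal PyTorch sketch of a generic cycle-consistency reconstruction term between two imaging centers. The tiny generators, tensor shapes, and unweighted loss are placeholder assumptions, not the authors' architecture, which additionally involves cross-domain confounding feature extraction and class-conditioning constraints.

```python
# Sketch of a cycle-consistency term for cross-domain CT image translation;
# the small generators are placeholders, not the paper's networks.
import torch
import torch.nn as nn

class TinyGenerator(nn.Module):
    """Placeholder image-to-image generator (one domain -> the other)."""
    def __init__(self, channels=1):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(channels, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, channels, 3, padding=1),
        )
    def forward(self, x):
        return self.net(x)

g_ab = TinyGenerator()   # maps center-A CT images toward center-B appearance
g_ba = TinyGenerator()   # maps center-B CT images back toward center-A appearance
l1 = nn.L1Loss()

x_a = torch.randn(4, 1, 64, 64)   # toy batch from domain A
x_b = torch.randn(4, 1, 64, 64)   # toy batch from domain B

# Cycle consistency: translating to the other domain and back should
# reconstruct the original image, which discourages the translators from
# discarding content (including class-conditioning information).
cycle_loss = l1(g_ba(g_ab(x_a)), x_a) + l1(g_ab(g_ba(x_b)), x_b)
cycle_loss.backward()
```

In a full framework this term would be combined with adversarial and classification losses and appropriate weights; the sketch only shows the reconstruction constraint itself.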