The source code of this article will be made publicly available for reproducible research inside the community.

Multiview representation learning (MVRL) leverages information from multiple views to obtain a common representation that summarizes the consistency and complementarity in multiview data. Most previous matrix factorization-based MVRL methods are shallow models that neglect complex hierarchical information, and the recently proposed deep multiview factorization models cannot explicitly capture consistency and complementarity in multiview data. We present the deep multiview concept learning (DMCL) method, which hierarchically factorizes the multiview data and aims to explicitly model consistent and complementary information while capturing semantic structures at the highest abstraction level. We explore two variants of the DMCL framework, DMCL-L and DMCL-N, with linear and nonlinear transformations between adjacent layers, respectively. We propose two block coordinate descent-based optimization methods for DMCL-L and DMCL-N, and we verify the effectiveness of DMCL on three real-world data sets for both clustering and classification tasks.

Channel pruning is an effective technique that has been widely applied to deep neural network compression. However, many existing methods prune from a pretrained model, resulting in repetitious pruning and fine-tuning processes. In this article, we propose a dynamic channel pruning method that prunes unimportant channels at an early stage of training. Rather than utilizing indirect criteria (e.g., weight norm, absolute weight sum, or reconstruction error) to guide connection or channel pruning, we design criteria directly related to the final accuracy of the network to evaluate the importance of each channel. Specifically, a channelwise gate is designed to randomly enable or disable each channel so that the conditional accuracy changes (CACs) can be estimated under the condition that each channel is disabled.
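The channelwise gating idea just described can be illustrated with a small sketch. This is a toy, assuming a Bernoulli on/off gate per channel of a feature map; the function name, gate probability, and tensor layout are illustrative, not the authors' implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

def apply_channel_gates(features, gate_prob=0.5, rng=rng):
    """Randomly enable or disable each channel of a (batch, C, H, W) array.

    Disabled channels are zeroed out, so a later accuracy measurement
    reflects the network 'under the condition of that channel disabled'.
    """
    n_channels = features.shape[1]
    gates = rng.random(n_channels) < gate_prob      # True = channel enabled
    gated = features * gates[None, :, None, None]   # broadcast over b, h, w
    return gated, gates

# Toy feature map: 2 samples, 4 channels, 3x3 spatial size, all ones.
features = np.ones((2, 4, 3, 3))
gated, gates = apply_channel_gates(features)
```

Repeating such random gatings across training iterations yields, per channel, samples of the accuracy with that channel off versus on, from which a CAC-style importance estimate can be accumulated.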
Practically, we construct two effective and efficient criteria to dynamically estimate the CAC at each iteration of training, so that unimportant channels can be gradually pruned during the training process. Finally, extensive experiments on multiple data sets (i.e., ImageNet, CIFAR, and MNIST) with various networks (i.e., ResNet, VGG, and MLP) demonstrate that the proposed method effectively reduces the parameters and computations of the baseline network while yielding higher or competitive accuracy. Interestingly, if we Double the initial Channels and then Prune Half (DCPH) of them to match the baseline's channel count, the network enjoys a remarkable performance improvement by shaping a more desirable structure.

Our previous study constructed a deep learning model for predicting gastrointestinal infection morbidity based on environmental pollutant indicators in some regions of central China. This article aims to adapt the prediction model for three purposes: 1) predicting the morbidity of a different disease in the same region; 2) predicting the morbidity of the same disease in a different region; and 3) predicting the morbidity of a different disease in a different region. We propose a tridirectional transfer learning approach that achieves these three purposes by 1) developing a combined univariate regression and multivariate Gaussian model to establish the relationship between the morbidity of the target disease and that of the source disease, together with the high-level pollutant features, in the current source region; 2) using mapping-based deep transfer learning to extend the current model to predict the morbidity of the source disease in both the source and target regions; and 3) applying the pattern of the combined model in the source region to the extended model to derive a new combined model for predicting the morbidity of the target disease in the target region.
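The univariate regression component of step 1) can be pictured with a minimal toy: fit a linear relationship between source-disease and target-disease morbidity on synthetic data. The data, coefficients, and noise level here are invented for illustration and have no connection to the study's actual measurements:

```python
import numpy as np

# Synthetic morbidity series for a source and a target disease in one region.
# The linear link (slope 0.6, intercept 0.3) and noise level are arbitrary.
rng = np.random.default_rng(1)
source_morbidity = rng.uniform(1.0, 5.0, size=50)
target_morbidity = 0.6 * source_morbidity + 0.3 + rng.normal(0.0, 0.05, size=50)

# Least-squares fit of target = a * source + b (degree-1 polynomial fit).
a, b = np.polyfit(source_morbidity, target_morbidity, 1)
pred = a * source_morbidity + b
```

In the paper's setting this regression is only one piece: it is combined with a multivariate Gaussian model over high-level pollutant features, and the learned pattern is then transferred to the extended deep model.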
We select gastric cancer as the target disease and use the proposed transfer learning approach to predict its morbidity in the source region and three target regions. The results show that, given only a limited number of labeled samples, our approach achieves an average prediction accuracy of over 80% in the source region and up to 78% in the target regions, which can contribute considerably to improving medical preparedness and response.

A least squares support vector machine (LS-SVM) offers performance comparable to that of SVMs for classification and regression. The main limitation of LS-SVM is that it lacks sparsity compared with SVMs, making it unsuitable for handling large-scale data due to computation and memory costs. To obtain a sparse LS-SVM, several pruning methods based on an iterative strategy have recently been proposed, but they do not consider a quantity constraint on the number of reserved support vectors, which is widely used in real-life applications. In this article, a noniterative algorithm based on the selection of globally representative points (global-representation-based sparse least squares support vector machine, GRS-LSSVM) is proposed to improve the performance of sparse LS-SVM. For the first time, we present a model of sparse LS-SVM with a quantity constraint. In solving for the optimal solution of the model, we find that using globally representative points to construct the reserved support vector set produces a better solution than other methods. We design an indicator based on point density and point dispersion to evaluate the global representativeness of points in feature space. Using the indicator, the top globally representative points are selected in one step from all points to construct the reserved support vector set of the sparse LS-SVM. After obtaining the set, the decision hyperplane of the sparse LS-SVM is computed directly using an algebraic formula.
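The one-step, noniterative selection idea can be sketched as follows. The particular density and dispersion measures, their combination weights, and the radius are assumptions for illustration; the actual GRS-LSSVM indicator is defined differently in the paper:

```python
import numpy as np

def representativeness(X, radius=1.0):
    """Toy indicator combining point density and point dispersion.

    Density: number of neighbours within `radius` (excluding the point itself).
    Dispersion: mean distance to all other points (large = isolated).
    High score = dense, central points; the equal weighting is arbitrary.
    """
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)
    density = (d < radius).sum(axis=1) - 1         # subtract self-match
    dispersion = d.mean(axis=1)
    return density / density.max() - dispersion / dispersion.max()

rng = np.random.default_rng(2)
X = rng.normal(size=(100, 2))                      # toy 2-D "feature space"
k = 10                                             # quantity constraint
score = representativeness(X)
support_idx = np.argsort(score)[-k:]               # top-k selected in one step
```

The point of the sketch is the control flow: the reserved support vector set is chosen once, by ranking a global indicator, rather than by iteratively pruning and refitting.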
This algorithm consumes only O(N²) computational complexity and O(N) memory, which makes it suitable for large-scale data sets. The experimental results show that the proposed algorithm has higher sparsity, greater stability, and lower computational complexity than traditional iterative algorithms.

In machine learning, it is common to interpret each data sample as a multivariate vector, disregarding the correlations among covariates. However, the data may actually be functional, i.e., each data point is a function of some variable, such as time, and the function is discretely sampled. The naive treatment of functional data as traditional multivariate data can lead to poor performance due to these correlations. In this article, we focus on subspace clustering for functional data, or curves, and propose a new method robust to shift and rotation. The idea is to define a function or curve together with all its versions generated by shift and rotation as an equivalence class, and then to find the subspace structure among all equivalence classes as a surrogate for all curves. Experimental evaluation on synthetic and real data reveals that this method massively outperforms prior clustering methods in both speed and accuracy when clustering functional data.
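The notion of collapsing all shifted versions of a curve into one equivalence-class representative can be illustrated with a standard shift-invariant transform. Using the DFT magnitude here is an assumption for the sketch (circular shifts change only the phase of Fourier coefficients, not their magnitude); it is a surrogate of the same flavor, not the paper's construction:

```python
import numpy as np

def shift_invariant_signature(curve):
    """Map a discretely sampled curve to a representative of its
    circular-shift equivalence class: the magnitude of its DFT.
    A circular shift multiplies each DFT coefficient by a unit-modulus
    phase, so the magnitudes are identical for all shifted versions.
    """
    return np.abs(np.fft.fft(curve))

t = np.linspace(0.0, 2.0 * np.pi, 64, endpoint=False)
curve = np.sin(3.0 * t)              # one discretely sampled function
shifted = np.roll(curve, 17)         # a shifted version of the same curve

sig1 = shift_invariant_signature(curve)
sig2 = shift_invariant_signature(shifted)
```

Clustering such signatures instead of the raw samples groups a curve with its shifted variants automatically, which is the spirit of treating each equivalence class, rather than each curve, as the clustering unit.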