This application has the potential to transform how COVID-19 researchers use public literature to enable their research. The PLATIPUS application provides the end user with a variety of ways to search, filter, and visualize over 100,000 COVID-19 publications. [This corrects the article DOI 10.2196/24365.]

Most modern neural networks for classification fail to account for the concept of the unknown. Trained neural networks are usually tested in an unrealistic scenario with only examples from a closed set of known classes. In an attempt to develop a more realistic model, the concept of working in an open-set environment has been introduced. This in turn leads to incremental learning, where a model with its own architecture and an initial trained set of data can identify unknown classes during the testing phase and autonomously update itself if evidence of a new class is detected. Problems that arise in incremental learning include the inefficient use of resources to retrain the classifier repeatedly and the decrease in classification accuracy as multiple classes are added over time. This process of instantiating new classes is repeated as many times as necessary, accruing errors. To address these problems, this article proposes the classification confidence threshold (CT) approach to prime neural networks for incremental learning, keeping accuracies high by limiting forgetting. A lean method is also used to reduce the resources consumed in retraining the neural network. The proposed method is based on the idea that a network can incrementally learn a new class even when exposed to a limited number of samples associated with the new class.
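The core thresholding idea can be illustrated with a minimal sketch: treat a prediction as "unknown" whenever the classifier's top softmax confidence falls below a chosen cutoff. The function names and the 0.9 threshold below are illustrative assumptions, not values taken from the article.

```python
import numpy as np

def softmax(logits):
    # Numerically stable softmax over a 1-D logit vector.
    z = logits - logits.max()
    e = np.exp(z)
    return e / e.sum()

def classify_with_threshold(logits, threshold=0.9):
    """Return a known class index, or -1 to flag an 'unknown' input.

    threshold is an illustrative value, not one from the article.
    """
    probs = softmax(logits)
    top = int(np.argmax(probs))
    return top if probs[top] >= threshold else -1

# A confident prediction is accepted as a known class.
print(classify_with_threshold(np.array([0.1, 0.2, 5.0])))  # 2
# A low-confidence prediction is flagged as unknown.
print(classify_with_threshold(np.array([1.0, 1.1, 0.9])))  # -1
```

In an incremental-learning loop, inputs flagged as unknown could be buffered and, once enough evidence accumulates, used to instantiate and train a new output class.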
This method can be applied to most existing neural networks with minimal changes to network architecture.

Deep learning has the potential to dramatically impact navigation and tracking state-estimation problems critical to autonomous vehicles and robotics. Measurement uncertainties in state-estimation systems based on Kalman and other Bayes filters are typically assumed to be a fixed covariance matrix. This assumption is risky, particularly for "black box" deep learning models, in which uncertainty can vary dramatically and unexpectedly. Accurate quantification of multivariate uncertainty will allow the full potential of deep learning to be used more safely and reliably in these applications. We show how to model multivariate uncertainty for regression problems with neural networks, incorporating both aleatoric and epistemic sources of heteroscedastic uncertainty. We train a deep uncertainty covariance-matrix model in two ways: directly, using a multivariate Gaussian density loss function, and indirectly, using end-to-end training through a Kalman filter. We experimentally show, in a visual tracking problem, the large impact that accurate multivariate uncertainty quantification can have on Kalman filter performance for both in-domain and out-of-domain evaluation data. We additionally show, in a challenging visual odometry problem, how end-to-end filter training can allow uncertainty predictions to compensate for filter weaknesses.

In unsupervised domain adaptation (UDA), a classifier for the target domain is trained with massive true-label data from the source domain and unlabeled data from the target domain. However, collecting true-label data in the source domain can be expensive and sometimes impractical. Compared to a true label (TL), a complementary label (CL) specifies a class that a pattern does not belong to; hence, collecting CLs would be less laborious than collecting TLs.
In this article, we propose a novel setting in which the source domain is composed of complementary-label data, and a theoretical bound for this setting is provided. We consider two cases of this setting: one in which the source domain contains only complementary-label data [completely complementary UDA (CC-UDA)], and one in which the source domain has plenty of complementary-label data and a small amount of true-label data [partly complementary UDA (PC-UDA)]. To this end, a complementary-label adversarial network (CLARINET) is proposed to solve CC-UDA and PC-UDA problems. CLARINET maintains two deep networks simultaneously, with one focusing on classifying the complementary-label source data and the other taking care of the source-to-target distributional adaptation. Experiments show that CLARINET significantly outperforms a series of competent baselines on handwritten digit-recognition and object-recognition tasks.

In this article, a novel composite hierarchical antidisturbance control (CHADC) algorithm aided by the information-theoretic learning (ITL) technique is developed for non-Gaussian stochastic systems subject to dynamic disturbances. The whole control process consists of time-domain intervals called batches. Within each batch, a CHADC scheme is applied to the system, in which a disturbance observer (DO) is employed to estimate the dynamic disturbance and a composite control strategy integrating feedforward compensation and feedback control is adopted. An information-theoretic measure (entropy or information potential) is employed to quantify the randomness of the controlled system, based on which the gain matrices of the DO and the feedback controller are updated between two adjacent batches. In this way, mean-square stability is guaranteed within each batch, and system performance improves as the batches progress.
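The information potential used in ITL can be estimated from error samples with a Gaussian kernel, and Rényi's quadratic entropy is its negative logarithm; concentrated (low-randomness) errors yield a higher information potential and thus a lower entropy. The sketch below is a generic Parzen-window estimator with an illustrative kernel width, not the article's specific gain-update rule.

```python
import numpy as np

def information_potential(e, sigma=1.0):
    """Parzen estimate of the information potential of error samples e:
    V(e) = (1/N^2) * sum_ij G_sigma(e_i - e_j). sigma is illustrative."""
    d = e[:, None] - e[None, :]                       # all pairwise differences
    k = np.exp(-d ** 2 / (2 * sigma ** 2)) / (np.sqrt(2 * np.pi) * sigma)
    return k.mean()

def renyi_quadratic_entropy(e, sigma=1.0):
    """Renyi's quadratic entropy H2 = -log V(e)."""
    return -np.log(information_potential(e, sigma))

concentrated = np.zeros(50)             # all tracking errors identical
spread = np.linspace(-3.0, 3.0, 50)     # errors widely dispersed
# Concentrated errors -> higher information potential, lower entropy.
print(information_potential(concentrated) > information_potential(spread))  # True
```

A batch-to-batch scheme of the kind described above could evaluate such a measure on the closed-loop error samples of one batch and adjust the observer and controller gains to reduce the entropy in the next.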
The proposed algorithm has enhanced disturbance-rejection ability and good applicability to non-Gaussian noise environments, which contributes to extending CHADC theory to the general stochastic case. Finally, simulation examples are included to verify the effectiveness of the theoretical results.

Recurrent neural networks (RNNs) are widely used for online regression due to their ability to generalize nonlinear temporal dependencies. As an RNN model, long short-term memory networks (LSTMs) are commonly preferred in practice, as these networks are capable of learning long-term dependencies while avoiding the vanishing-gradient problem. However, due to their large number of parameters, training LSTMs requires considerably longer training times than simple RNNs (SRNNs). In this article, we efficiently achieve the online regression performance of LSTMs with SRNNs. To this end, we introduce a first-order training algorithm with linear time complexity in the number of parameters. We show that when SRNNs are trained with our algorithm, they provide regression performance very similar to that of LSTMs in two to three times shorter training time. We support our experimental results with strong theoretical analysis by providing regret bounds on the convergence rate of our algorithm. Through an extensive set of experiments, we verify our theoretical work and demonstrate significant performance improvements of our algorithm with respect to LSTMs and other state-of-the-art learning models.
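The online-regression setting described above can be sketched with a minimal Elman-style SRNN trained by a plain first-order (truncated one-step gradient) update at each time step. This is a generic illustration of the setting, assuming a toy next-value prediction task; it is not the article's regret-bounded algorithm, and all sizes and rates are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
n_in, n_h = 1, 8
W_x = rng.normal(scale=0.3, size=(n_h, n_in))  # input-to-hidden weights
W_h = rng.normal(scale=0.3, size=(n_h, n_h))   # hidden-to-hidden weights
w_o = rng.normal(scale=0.3, size=n_h)          # hidden-to-output weights
lr = 0.05                                      # illustrative learning rate

h = np.zeros(n_h)
losses = []
for t in range(500):
    x = np.array([np.sin(0.1 * t)])      # current input sample
    y = np.sin(0.1 * (t + 1))            # target: next value of the signal
    h_new = np.tanh(W_x @ x + W_h @ h)   # Elman (simple RNN) recurrence
    y_hat = w_o @ h_new                  # scalar prediction
    err = y_hat - y
    losses.append(err ** 2)
    # Truncated one-step gradient update: ignore the dependence of the
    # hidden state on weights at earlier time steps (cheap, first-order).
    dh = 2 * err * w_o * (1 - h_new ** 2)
    w_o -= lr * 2 * err * h_new
    W_x -= lr * np.outer(dh, x)
    W_h -= lr * np.outer(dh, h)          # h here is still the previous state
    h = h_new
```

On this toy signal the squared prediction error typically falls as the loop runs, and each update costs time linear in the number of parameters, which is the efficiency property the abstract emphasizes for SRNN training.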