Traditional recommendation methods suffer from limited performance, a problem that can be alleviated by incorporating abundant auxiliary/side information. This article focuses on a personalized music recommender system that incorporates rich content and context data in a unified and adaptive way to address this problem. The content information includes music textual content, such as metadata, tags, and lyrics, while the context data capture users' behaviors, including music listening records, music playing sequences, and sessions. Specifically, a heterogeneous information network (HIN) is first constructed to incorporate the different kinds of content and context data. Then, a novel method called content- and context-aware music embedding (CAME) is proposed to obtain low-dimensional, dense, real-valued feature representations (embeddings) of music pieces from the HIN. Notably, a music piece generally highlights different aspects when interacting with different neighbors and should therefore have distinct representations. CAME seamlessly combines deep learning techniques, including convolutional neural networks and attention mechanisms, with the embedding model to adaptively capture the intrinsic features of music pieces as well as their dynamic relevance and interactions. Finally, we infer users' general musical preferences as well as their contextual preferences for music and propose a content- and context-aware music recommendation method. Comprehensive experiments, with quantitative and qualitative evaluations, have been performed on real-world music data sets, and the results show that the proposed approach outperforms state-of-the-art baselines and handles sparse data effectively.

We propose a class of Clifford-valued neutral-type neural networks with delays in the leakage term.
Using a direct method, that is, without decomposing the Clifford-valued system under consideration into a real-valued system, we obtain sufficient conditions for the existence and global exponential stability of μ-pseudo almost periodic solutions of the network. Finally, we give a numerical example to show the feasibility of our results.

Recent deep trackers have shown superior performance in visual tracking. In this article, we propose a cascaded correlation refinement approach to improve the robustness of deep tracking. The core idea is to address accurate target localization and reliable model update in a collaborative way. To this end, our approach cascades multiple stages of correlation refinement to progressively refine target localization. The localized object can then be used to learn an accurate on-the-fly model, improving the reliability of model update. Meanwhile, we introduce an explicit measure to identify tracking failure and leverage a simple yet effective look-back scheme that adaptively combines the initial model and the on-the-fly model to update the tracking model. As a result, the tracking model can localize the target more accurately. Extensive experiments on OTB2013, OTB2015, VOT2016, VOT2018, UAV123, and GOT-10k demonstrate that the proposed tracker achieves the best robustness among state-of-the-art trackers.

In this article, we investigate a class of memristive neural networks (MNNs) with time-varying delays and leakage delays via sliding mode control (SMC), with and without control disturbance. SMC is used to ensure the stability of the MNNs. According to the characteristics of the MNNs, we consider the following three models: the first is the MNNs with time-varying delays; the second is the MNNs with time-varying delays and control disturbance; and the third is the MNNs with time-varying delays, leakage delays, and control disturbance.
We state the assumptions and lemmas needed to ensure that our main results hold. The sliding surface, the corresponding sliding mode controller, and the Lyapunov functions are constructed for the different models to ensure the stability of the MNNs. Finally, examples and simulations verify the validity of our main results by solving linear matrix inequalities (LMIs), and conclusions and an analysis of the results are given.

Sampling one or more effective solutions from large search spaces is a recurring idea in machine learning (ML), and sequential optimization has become a popular solution. Typical examples include data summarization, sample mining for predictive modeling, and hyperparameter optimization. Existing solutions attempt to adaptively trade off between global exploration and local exploitation, in which the initial exploratory sample is critical to their success. While discrepancy-based samples have become the de facto approach for exploration, results from computer graphics suggest that coverage-based designs, e.g., Poisson disk sampling, can be a superior alternative. To successfully adopt coverage-based sample designs, originally developed for 2-D image analysis, to ML applications, we propose fundamental advances: we construct a parameterized family of designs with provably improved coverage characteristics and develop algorithms for effective sample synthesis. Using experiments in sample mining and hyperparameter optimization for supervised learning, we show that our approach consistently outperforms existing exploratory sampling methods in both blind exploration and sequential search with Bayesian optimization.

Learning with streaming data has received extensive attention during the past few years. Existing approaches assume that the feature space is fixed or changes by following explicit regularities, limiting their applicability in real-time applications.
For example, in a smart healthcare platform, the feature space of the patient data varies when different medical service providers use nonidentical feature sets to describe the patients' symptoms. To fill this gap, in this article we propose a novel learning paradigm, namely, Generative Learning With Streaming Capricious (GLSC) data, which makes no assumption about the feature space dynamics. In other words, GLSC handles data streams with a varying feature space, where each arriving data instance can arbitrarily carry new features and/or stop carrying some old features. Specifically, GLSC trains a learner on a universal feature space that establishes relationships between old and new features, so that the patterns learned in the old feature space can be used in the new feature space.
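The universal-feature-space idea described above can be illustrated with a minimal sketch. This is not the authors' GLSC implementation (which is generative and establishes explicit relationships between old and new features); it is a hypothetical online linear learner that simply grows its weight vector as new features arrive and ignores features an instance stops carrying, so patterns learned on old features still apply in the new feature space.

```python
# Illustrative sketch only (assumed design, not the GLSC algorithm):
# an online logistic learner over a growing "universal" feature space.
# Each instance is a dict of feature -> value; instances may introduce
# new features or drop old ones ("capricious" streams).
import math

class UniversalSpaceLearner:
    def __init__(self, lr=0.1):
        self.lr = lr
        self.weights = {}   # universal feature space: feature name -> weight
        self.bias = 0.0

    def predict(self, x):
        # Features absent from this instance contribute nothing.
        raw = self.bias + sum(self.weights.get(f, 0.0) * v for f, v in x.items())
        return 1.0 / (1.0 + math.exp(-raw))  # sigmoid probability

    def update(self, x, y):
        # y in {0, 1}; one logistic-loss SGD step. A feature seen for the
        # first time is added to the universal space with zero weight.
        err = self.predict(x) - y
        for f, v in x.items():
            self.weights[f] = self.weights.get(f, 0.0) - self.lr * err * v
        self.bias -= self.lr * err

learner = UniversalSpaceLearner()
# The feature set varies from instance to instance.
stream = [({"fever": 1.0, "cough": 1.0}, 1),
          ({"cough": 1.0}, 1),                 # "fever" dropped
          ({"fever": 0.0, "rash": 1.0}, 0)]    # "rash" is new
for x, y in stream:
    learner.update(x, y)
print(sorted(learner.weights))  # -> ['cough', 'fever', 'rash']
```

The key point is that the learner never fixes its feature dimensionality up front: the universal space is the running union of all features observed so far, and prediction degrades gracefully when an instance carries only a subset of them.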