To investigate the effectiveness of the proposed model, a comprehensive empirical study of VVAMo is conducted on an extensive collection of commonly used real-world network datasets. The results show that VVAMo attains superior performance over existing classical and state-of-the-art approaches.

Lithology identification plays an essential role in formation characterization and reservoir exploration. As an emerging technology, intelligent logging lithology identification, which aims to infer the lithology type from well-logging curves using machine-learning methods, has received great attention recently. However, a model trained on interpreted logging data is not effective at predicting new exploration wells because of the discrepancy between their data distributions. In this article, we aim to train a lithology identification model for a target well using a large amount of labeled logging data from source wells and a small amount of labeled data from the target well. The challenges of this task lie in three aspects: 1) distribution misalignment; 2) data divergence; and 3) cost limitation. To address these challenges, we propose a novel active adaptation for logging lithology identification (AALLI) framework that combines active learning (AL) and domain adaptation (DA). The contributions of this article are threefold: 1) the domain-discrepancy problem in intelligent logging lithology identification is investigated for the first time, and a novel framework that incorporates AL and DA into lithology identification is proposed to handle it; 2) we design a discrepancy-based AL and pseudolabeling (PL) module and an instance importance weighting module to query the most uncertain target information and retain the most confident source information, which addresses the challenges of cost limitation and distribution misalignment; and 3) we develop a reliability detecting module to improve the reliability of target pseudolabels, which, together with the discrepancy-based AL and PL module, addresses the challenge of data divergence. Extensive experiments on three real-world well-logging datasets demonstrate the effectiveness of the proposed method compared with the baselines.

To quantify user-item preferences, a recommender system (RS) commonly adopts a high-dimensional and sparse (HiDS) matrix. Such a matrix can be represented by a non-negative latent factor analysis model relying on a single latent factor (LF)-dependent, non-negative, and multiplicative update algorithm. However, existing models' representative abilities are limited by their specialized learning objectives. To address this issue, this study proposes an α-β-divergence-generalized model that enjoys fast convergence. Its ideas are threefold: 1) generalizing the learning objective with the α-β-divergence to achieve a highly accurate representation of HiDS data; 2) incorporating a generalized momentum method into parameter learning for fast convergence; and 3) implementing self-adaptation of controllable hyperparameters for excellent practicability. Empirical studies on six HiDS matrices from real RSs demonstrate that, compared with state-of-the-art LF models, the proposed one achieves significant gains in accuracy and efficiency when estimating the many missing entries of an HiDS matrix.
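The abstract does not state the generalized objective in closed form. For reference, a standard form of the α-β-divergence (the AB-divergence of Cichocki et al.) between non-negative data P and its model estimate Q is sketched below; restricting the sum to the observed entry set Λ of the HiDS matrix is our assumption, matching the sparse setting, and may differ from the paper's exact objective.

```latex
% AB-divergence between observed entries p_{ij} of a HiDS matrix and their
% model estimates q_{ij}; valid for \alpha, \beta, \alpha + \beta \neq 0.
D_{AB}^{(\alpha,\beta)}\left(\mathbf{P} \,\|\, \mathbf{Q}\right)
  = -\frac{1}{\alpha\beta} \sum_{(i,j) \in \Lambda}
    \left( p_{ij}^{\alpha}\, q_{ij}^{\beta}
         - \frac{\alpha}{\alpha+\beta}\, p_{ij}^{\alpha+\beta}
         - \frac{\beta}{\alpha+\beta}\, q_{ij}^{\alpha+\beta} \right)
```

Particular choices of (α, β) recover familiar objectives; for example, α = β = 1 yields half the squared Euclidean distance, which is one reason a single generalized objective can subsume several specialized ones.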
Measurement of total plaque area (TPA) is important for determining long-term risk of stroke and for monitoring carotid plaque progression. Since delineation of carotid plaques is required, a deep learning method can provide automatic plaque segmentations and TPA measurements; however, it requires large datasets and manual annotations for training, with unknown performance on new datasets. A UNet++ ensemble algorithm was proposed to segment plaques from 2D carotid ultrasound images, trained on three small datasets (n = 33, 33, 34 subjects) and tested on 44 subjects from the SPARC dataset (n = 144, London, Canada). The ensemble was also trained on the entire SPARC dataset and tested on a different dataset (n = 497, Zhongnan Hospital, China). Algorithm and manual segmentations were compared using the Dice similarity coefficient (DSC), and TPAs were compared using the difference (ΔTPA), the Pearson correlation coefficient (r), and Bland-Altman analyses. Segmentation variability was determined using the intra-class correlation coefficient (ICC) and the coefficient of variation (CoV). For the 44 SPARC subjects, algorithm DSC was 83.3-85.7%, and algorithm TPAs were strongly correlated (r = 0.985-0.988; p < 0.001) with manual results, with marginal biases (0.73-6.75 mm²) across the three training datasets. The algorithm ICC for TPAs (ICC = 0.996) was similar to the intra- and inter-observer manual results (ICC = 0.977, 0.995). The algorithm CoV for plaque areas (6.98%) was smaller than the inter-observer manual CoV (7.54%). For the Zhongnan dataset, DSC was 88.6%; algorithm and manual TPAs were strongly correlated (r = 0.972, p < 0.001), with ΔTPA = -0.44 ± 4.05 mm² and ICC = 0.985. The proposed algorithm, trained on small datasets, segmented a different dataset without retraining, with accuracy and precision that may be useful clinically and for research.

The coronavirus disease 2019 (COVID-19) has swept all over the world. Owing to limited detection facilities, especially in developing countries, a large number of suspected cases can only receive a common clinical diagnosis rather than more effective detections such as Reverse Transcription Polymerase Chain Reaction (RT-PCR) tests or CT scans. This motivates us to develop a quick screening method based on common clinical diagnosis results. However, the diagnostic items of different patients may vary greatly, and the dimension of the diagnosis data varies widely among suspected patients, making it hard to process these indefinite-dimension data with classical classification algorithms. To resolve this problem, we propose an Indefiniteness Elimination Network (IE-Net) to eliminate the influence of the varied dimensions and make predictions about COVID-19 cases. IE-Net follows an encoder-decoder framework, and an indefiniteness elimination operation is proposed to transfer an indefinite-dimension feature into a fixed-dimension feature.
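As a concrete illustration of how an indefinite-dimension input can be reduced to a fixed-dimension feature, the sketch below embeds each patient's variable-length set of (diagnostic item, value) pairs and applies permutation-invariant pooling. This is a minimal sketch of the general idea, assuming a set-based encoding; the class name, layer sizes, and max-pooling choice are illustrative assumptions and are not taken from IE-Net itself.

```python
# Illustrative sketch (not the authors' IE-Net): mapping a variable-length
# set of (diagnostic item, value) pairs to a fixed-dimension feature via
# per-item embedding followed by permutation-invariant pooling.
import torch
import torch.nn as nn

class IndefinitenessElimination(nn.Module):
    def __init__(self, n_items: int, dim: int = 64):
        super().__init__()
        self.item_emb = nn.Embedding(n_items, dim)   # one vector per diagnostic item
        self.value_proj = nn.Linear(1, dim)          # lift each scalar measurement
        self.encoder = nn.Sequential(nn.Linear(dim, dim), nn.ReLU())

    def forward(self, item_ids: torch.Tensor, values: torch.Tensor) -> torch.Tensor:
        # item_ids: (n,) long tensor; values: (n,) float tensor; n varies per patient
        h = self.encoder(self.item_emb(item_ids) + self.value_proj(values.unsqueeze(-1)))
        return h.max(dim=0).values                   # fixed (dim,) vector regardless of n

# Usage: two patients with different numbers of diagnostic items yield
# features of identical dimension, ready for a shared downstream classifier.
ie = IndefinitenessElimination(n_items=100)
f1 = ie(torch.tensor([3, 17, 42]), torch.tensor([0.5, 1.2, -0.3]))
f2 = ie(torch.tensor([3, 8, 17, 42, 99]), torch.tensor([0.5, 0.1, 1.2, -0.3, 2.0]))
assert f1.shape == f2.shape == (64,)
```

Because the pooling collapses the item axis, a patient with 3 recorded diagnostic items and a patient with 30 produce features of the same dimension, so one shared classifier (or decoder) can serve every suspected case.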