Despite significant efforts, the COVID-19 pandemic has put enormous pressure on health care systems around the world, threatening the quality of patient care. Telemonitoring offers the opportunity to closely monitor patients with confirmed or suspected COVID-19 from home and allows for the timely identification of worsening symptoms. Additionally, it may decrease the number of hospital visits and admissions, thereby reducing the use of scarce resources, optimizing health care capacity, and minimizing the risk of viral transmission. In this paper, we present a COVID-19 telemonitoring care pathway developed at a tertiary care hospital in the Netherlands, which combines the monitoring of vital parameters with video consultations for adequate clinical assessment. Additionally, we report a series of medical, scientific, organizational, and ethical recommendations that may serve as a guide for the design and implementation of telemonitoring pathways for COVID-19 and other diseases worldwide.

3-D radiotherapy is an effective treatment modality for breast cancer. In 3-D radiotherapy, delineation of the clinical target volume (CTV) is an essential step in the establishment of treatment plans. However, manual delineation is subjective and time consuming. In this study, we propose an automated segmentation model based on deep neural networks for the breast cancer CTV in planning computed tomography (CT). Our model is composed of three stages that work in a cascade, making it applicable to real-world scenarios. The first stage determines which slices contain CTVs, as not all CT slices include breast lesions. The second stage detects the region of the human body in the entire CT slice, eliminating boundary areas, which may adversely affect segmentation of the CTV. The third stage delineates the CTV.
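The three-stage cascade described above can be sketched as a simple pipeline. This is an illustrative sketch only, not the authors' implementation; all function names and the toy predicates passed into `cascade` are hypothetical placeholders for the three learned networks.

```python
# Hypothetical sketch of the three-stage cascade: slice filtering,
# body-region cropping, then CTV delineation. The callables stand in
# for the three trained networks described in the abstract.

def stage1_slice_filter(slices, contains_ctv):
    """Keep only CT slices predicted to contain a CTV."""
    return [s for s in slices if contains_ctv(s)]

def stage2_body_crop(slice_, body_box):
    """Crop the slice to the detected body region, dropping boundary areas."""
    x0, y0, x1, y1 = body_box(slice_)
    return [row[x0:x1] for row in slice_[y0:y1]]

def stage3_delineate(cropped, segment):
    """Produce a binary CTV mask for the cropped slice."""
    return segment(cropped)

def cascade(slices, contains_ctv, body_box, segment):
    masks = []
    for s in stage1_slice_filter(slices, contains_ctv):
        masks.append(stage3_delineate(stage2_body_crop(s, body_box), segment))
    return masks
```

The cascade structure means later stages never see slices or boundary regions that the earlier stages rejected, which is the efficiency argument made in the abstract.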
To allow the network to focus on the breast mass in the slice, a novel dynamically strided convolution operation is proposed, which shows better performance than standard convolution. To train and evaluate the model, a large dataset containing 455 cases and 50,425 CT slices was constructed. The proposed model achieves an average dice similarity coefficient (DSC) of 0.802 and 0.801 for the right- and left-sided breast, respectively. Our method shows superior performance to previous state-of-the-art approaches.

Data-driven soft sensors have been widely applied to estimate difficult-to-measure quality variables in industrial processes. How to extract effective feature representations from complex process data remains a difficult and active topic in the soft-sensing field. Deep learning (DL), which has made great progress in many fields recently, has been used for process monitoring and quality prediction owing to its outstanding nonlinear modeling and feature extraction abilities. In this work, a deep stacked autoencoder (SAE) is introduced to construct a soft sensor model. Nevertheless, conventional SAE-based methods do not take information related to the target values into account during the pretraining stage, and they use only the feature representations in the last hidden layer for the final prediction. To this end, a novel gated stacked target-related autoencoder (GSTAE) is proposed to improve modeling performance with respect to these two issues. By adding the prediction errors of target values to the loss function during the layerwise pretraining procedure, target-related information is used to guide the feature learning process. Besides, gated neurons are utilized to control the information flow from different layers to the final output neuron, taking full advantage of different levels of abstraction and quantifying their contributions.
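The two GSTAE ideas above can be sketched minimally. This is a hedged illustration, not the authors' code: the loss weight `lam` and all function names are assumptions, and the gated combination is shown as a plain weighted sum rather than a full trained network.

```python
# Illustrative sketch of the two GSTAE ingredients described above.
# (1) Layerwise pretraining loss with an added target-prediction term.
# (2) Gated combination of features from different layers at the output.
# All names and the weighting scheme are hypothetical.

def layer_loss(x, x_rec, y, y_pred, lam=0.5):
    """Reconstruction MSE plus a weighted target-prediction error,
    so pretrained features stay relevant to the quality variable y."""
    rec = sum((a - b) ** 2 for a, b in zip(x, x_rec)) / len(x)
    tgt = (y - y_pred) ** 2
    return rec + lam * tgt

def gated_output(layer_features, gates, weights):
    """Each layer's contribution to the final output neuron is scaled
    by its gate value, quantifying that layer's contribution."""
    return sum(g * sum(w * f for w, f in zip(ws, feats))
               for g, ws, feats in zip(gates, weights, layer_features))
```

In the paper the gates are learned jointly with the network; here they are fixed scalars purely to show the information-flow idea.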
Finally, the effectiveness and feasibility of the proposed approach are verified in two real industrial cases.

In this article, we present a generic locomotion control framework for legged robots and a strategy for control policy optimization. The framework is based on neural control and black-box optimization. The neural control combines a central pattern generator (CPG) and a radial basis function (RBF) network to create a CPG-RBF network. The control network acts as a neural basis to produce arbitrary rhythmic trajectories for the joints of robots. The main features of the CPG-RBF network are that 1) it is generic, since it can be applied to legged robots with different morphologies; 2) it has few control parameters, resulting in fast learning; 3) it is scalable, both in terms of policy/trajectory complexity and the number of legs that can be controlled using similar trajectories; 4) it does not rely heavily on sensory feedback to generate locomotion and is thus less prone to sensory faults; and 5) once trained, it is simple, minimal, and intuitive to use and analyze. These features lead to an easy-to-use framework with fast convergence and the ability to encode complex locomotion control policies. In this work, we show that the framework can successfully be applied to three different simulated legged robots with varying morphologies, and even with broken joints, to learn locomotion control policies. We also show that, after learning, the control policies can be successfully transferred to a real-world robot without any modifications. We furthermore show the scalability of the framework by implementing it as a central controller for all legs of a robot and as a decentralized controller for individual legs and leg pairs. By investigating the correlation between robot morphology and encoding type, we are able to present a strategy for control policy optimization.
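The CPG-RBF combination can be illustrated with a minimal sketch, assuming a simple phase oscillator as the CPG and Gaussian kernels placed around the cycle as the RBF layer. This is not the authors' implementation; the oscillator, kernel width, and function names are illustrative. The few RBF weights are exactly the kind of low-dimensional parameter vector a black-box optimizer would tune.

```python
import math

# Illustrative CPG-RBF sketch: a phase oscillator (CPG) drives an RBF
# layer that shapes the phase into an arbitrary periodic joint trajectory.
# The RBF weights are the (few) learnable control parameters.

def cpg_phase(t, freq=1.0):
    """Phase of a simple oscillator, wrapped to [0, 2*pi)."""
    return (2 * math.pi * freq * t) % (2 * math.pi)

def rbf_output(phase, weights, centers, width=0.5):
    """Weighted sum of Gaussian kernels placed around the phase cycle."""
    def kernel(c):
        # wrap-around distance on the circle
        d = min(abs(phase - c), 2 * math.pi - abs(phase - c))
        return math.exp(-(d ** 2) / (2 * width ** 2))
    return sum(w * kernel(c) for w, c in zip(weights, centers))

def joint_trajectory(times, weights, centers):
    """Sample a rhythmic joint trajectory over a list of time points."""
    return [rbf_output(cpg_phase(t), weights, centers) for t in times]
```

Because only `weights` (and optionally `centers`) are optimized, the same structure scales from one central controller for all legs to one decentralized copy per leg, as the abstract describes.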
Finally, we show how sensory feedback can be integrated into the CPG-RBF network to enable online adaptation.

Visual question answering (VQA) has been proposed as a challenging task and has attracted extensive research attention. It aims to learn a joint representation of the question-image pair for answer inference. Most existing methods focus on exploring the multi-modal correlation between the question and the image to learn the joint representation. However, answer-related information is not fully captured by these methods, with the result that the learned representation fails to reflect the answer to the question. To tackle this problem, we propose a novel model, i.e., adversarial learning with multi-modal attention (ALMA), for VQA. An adversarial learning-based framework is proposed to learn a joint representation that effectively reflects the answer-related information. Specifically, multi-modal attention with a Siamese similarity learning method is designed to build two embedding generators, i.e., a question-image embedding and a question-answer embedding. Then, adversarial learning is conducted as an interplay between the two embedding generators and an embedding discriminator.
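The Siamese similarity component above can be sketched in its simplest form: both generators map their inputs into a shared embedding space, and a similarity measure compares the question-image embedding against the question-answer embedding. The cosine measure below is a common choice for Siamese setups but is an assumption here, not a detail confirmed by the abstract.

```python
# Minimal sketch of the Siamese similarity comparison between the
# question-image and question-answer embeddings. Cosine similarity is
# an illustrative choice; the paper's exact measure is not specified here.

def cosine_similarity(u, v):
    """Cosine similarity between two embedding vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = sum(a * a for a in u) ** 0.5
    nv = sum(b * b for b in v) ** 0.5
    return dot / (nu * nv) if nu and nv else 0.0
```

In the adversarial interplay, the discriminator tries to distinguish the two embeddings while the generators are trained to make the question-image embedding indistinguishable from the answer-informed one, which is how answer-related information is pushed into the joint representation.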