Endoscopic photoacoustic tomography (EPAT) is an interventional application of photoacoustic tomography (PAT) for visualizing anatomical features and functional components of biological cavity structures such as the nasal cavity, the digestive tract, or coronary arterial vessels. One of the main challenges for the clinical applicability of EPAT is incomplete acoustic measurement due to the limited number of detectors or the limited detection view enclosed within the cavity. In this case, conventional image reconstruction methods suffer from significantly degraded image quality. This work introduces a compressed-sensing (CS)-based method to reconstruct a high-quality image of the initial pressure distribution on a luminal cross-section from incomplete, discrete acoustic measurements. The method constructs and trains a complete dictionary for the sparse representation of the photoacoustically induced acoustic measurements. The sparse representation of the complete acoustic signals is then obtained by optimization from the sparse measurements and a sensing matrix, and the complete acoustic signals are recovered from this representation by inverse sparse transformation. The image of the initial pressure distribution is finally reconstructed from the recovered signals using the time reversal (TR) algorithm. Numerical experiments show that high-quality images with reduced under-sampling artifacts can be reconstructed from sparse measurements, and the comparison results suggest that the proposed method outperforms standard TR reconstruction by 40% in terms of the structural similarity of the reconstructed images.

Acute kidney injury (AKI) commonly occurs in hospitalized patients and can lead to serious medical complications, but it is preventable and potentially reversible with early diagnosis and management. Several machine-learning-based predictive models have therefore been built to predict AKI in advance from electronic health record (EHR) data. These inpatient AKI models have typically been built to make predictions at a single fixed time, for example 24 or 48 h after admission. However, hospital stays can last several days, and AKI can develop within a few hours at any point during the stay. To predict AKI before it develops at any time during a hospital stay, we present a novel framework in which AKI is continually and automatically predicted from EHR data over the entire stay. The continual model makes a prediction every time a patient's AKI-relevant variables change in the EHR; it is therefore not tied to a particular prediction time and can leverage the latest values of all AKI-relevant patient variables. A method to comprehensively evaluate the overall performance of a continual prediction model is also introduced, and we show experimentally on a large dataset of hospital stays that the continual prediction model outperforms all one-time prediction models in predicting AKI.
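The core of the CS recovery step described above is to express each fully sampled acoustic signal as a sparse combination of learned dictionary atoms and to estimate that sparse code from the sub-sampled measurements. The following Python sketch illustrates the idea with scikit-learn; the training signals, sub-sampling pattern, and sparsity level are placeholder assumptions, and the final time reversal reconstruction (e.g., with k-Wave) is not shown.

```python
# Minimal sketch of the CS recovery step, assuming fully sampled training
# signals are available for dictionary learning. All data here are random
# placeholders; the time reversal (TR) image reconstruction itself is omitted.
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning
from sklearn.linear_model import OrthogonalMatchingPursuit

rng = np.random.default_rng(0)
n_samples, n_train = 256, 500                        # time samples per signal, training signals
train_signals = rng.standard_normal((n_train, n_samples))   # placeholder training set

# 1) Train a dictionary D for sparse representation of the complete signals.
dico = MiniBatchDictionaryLearning(n_components=512, alpha=1.0, max_iter=100, random_state=0)
D = dico.fit(train_signals).components_.T            # shape (n_samples, n_atoms)

# 2) Sensing matrix Phi: keep only a quarter of the time samples.
kept = np.sort(rng.choice(n_samples, size=n_samples // 4, replace=False))
Phi = np.eye(n_samples)[kept]

# 3) From the incomplete measurement y = Phi @ s, estimate the sparse code
#    alpha with orthogonal matching pursuit and recover s_hat = D @ alpha.
s_true = train_signals[0]
y = Phi @ s_true
omp = OrthogonalMatchingPursuit(n_nonzero_coefs=20, fit_intercept=False)
omp.fit(Phi @ D, y)
s_hat = D @ omp.coef_
print("relative recovery error:", np.linalg.norm(s_hat - s_true) / np.linalg.norm(s_true))
```

On real photoacoustic data the dictionary would be trained on fully sampled reference signals, so the recovered signals would be far more faithful than with the random placeholders used here.

For the continual AKI framework, the key mechanism is event-driven prediction: every update to an AKI-relevant variable triggers a fresh prediction that uses the latest value of every variable. Below is a minimal sketch of that loop, assuming a trained classifier with a scikit-learn-style `predict_proba`; the variable set and event format are illustrative assumptions, not the paper's actual EHR schema.

```python
# Minimal sketch of event-driven (continual) AKI prediction. The variable set
# and the (time, variable, value) event format are illustrative assumptions.
from typing import Dict, Iterable, List, Tuple

AKI_VARIABLES = ["creatinine", "urine_output", "systolic_bp", "heart_rate"]  # example set

def continual_aki_predictions(
    events: Iterable[Tuple[float, str, float]],      # (hours from admission, variable, value)
    model,
) -> List[Tuple[float, float]]:
    """Emit an AKI risk score every time an AKI-relevant variable changes."""
    latest: Dict[str, float] = {v: float("nan") for v in AKI_VARIABLES}
    scores: List[Tuple[float, float]] = []
    for t, variable, value in sorted(events):
        if variable not in latest:
            continue                                  # variable not used by the model
        latest[variable] = value                      # carry the latest value forward
        features = [[latest[v] for v in AKI_VARIABLES]]
        # Assumes the model tolerates missing values (e.g. gradient boosting)
        # or that imputation happens upstream.
        risk = model.predict_proba(features)[0, 1]
        scores.append((t, float(risk)))
    return scores
```

Evaluating such a model then amounts to scoring the whole trajectory of predictions per stay rather than a single time point, which is the motivation for the evaluation method the paper introduces.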
Genomic profiling of cancer studies has generated comprehensive gene expression patterns for diverse phenotypes, and computational methods that employ transcriptomics datasets have been proposed to model gene expression data. Dynamic Bayesian Networks (DBNs) have been used to model time-series datasets and to infer regulatory networks, and cancer classification through DBN-based approaches can reveal the value of exploiting knowledge from statistically significant genes and key regulatory molecules. Although microarray datasets have been employed extensively by classification methods for decision making, knowledge from the pathway level has not been adequately addressed in the literature for DBN-based cancer classification. In the present study, we identify the genes that act as regulators and mediate the activity of transcription factors found in all promoters of our differentially expressed gene sets. These features serve as potential priors for distinguishing tumor from normal samples using a DBN-based classification approach. We employed three microarray datasets from the Gene Expression Omnibus (GEO) public functional repository and performed differential expression analysis. Promoter and pathway analysis of the identified genes revealed the key regulators that influence their transcription mechanisms. We applied the DBN algorithm to the selected genes and identified the features that accurately classify the samples into tumors and controls. Both accuracy and Area Under the Curve (AUC) were high for the gene sets comprising the differentially expressed genes together with their master regulators (accuracy 70.8%-98.5%; AUC 0.562-0.985).

"Bad channels" in implantable multi-channel recordings hinder the precise quantitative description and analysis of neural signals, especially in the current "big data" era. In this paper, we combine multimodal features based on local field potentials (LFPs) and spike signals to detect bad channels automatically using machine learning. On the basis of 2632 pairs of LFP and spike recordings acquired from five pigeons, 12 multimodal features are used to quantify each channel's temporal, frequency, phase, and firing-rate properties. We implement seven classifiers for the detection task, using the synthetic minority oversampling technique (SMOTE) and Fisher-weighted Euclidean distance sorting (FWEDS) to cope with the class imbalance problem. Two-dimensional scatterplots and the classification results demonstrate that the correlation coefficient, phase locking value, and coherence have good discriminability. With the multimodal features, almost all classifiers achieve high accuracy and bad-channel detection rates after the SMOTE operation, and the Random Forest classifier shows the strongest overall performance (accuracy 0.9092 ± 0.0081, precision 0.9123 ± 0.0100, recall 0.9057 ± 0.0121). The proposed approach can automatically detect bad channels from multimodal features, and the results provide a valuable reference for larger datasets.
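The classification pipeline in the DBN study reduces to selecting differentially expressed genes (plus their master regulators) as features and then measuring how well those features separate tumor from normal samples by accuracy and AUC. The sketch below reproduces only that skeleton: a logistic regression stands in for the paper's DBN classifier, an ANOVA F-test stands in for the full differential expression and promoter analysis, and the expression matrix is a random placeholder.

```python
# Minimal sketch of feature selection plus classification with accuracy/AUC
# evaluation. A logistic regression stands in for the paper's DBN classifier;
# SelectKBest (ANOVA F-test) stands in for the differential expression and
# master-regulator selection; the expression data and labels are placeholders.
import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
X = rng.standard_normal((60, 2000))    # 60 samples x 2000 genes (placeholder expression data)
y = np.array([0] * 30 + [1] * 30)      # 0 = normal, 1 = tumor

# Selection happens inside the pipeline so each cross-validation fold picks its
# own "differentially expressed" genes and avoids information leakage.
pipe = make_pipeline(SelectKBest(f_classif, k=50), LogisticRegression(max_iter=1000))
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
acc = cross_val_score(pipe, X, y, cv=cv, scoring="accuracy")
auc = cross_val_score(pipe, X, y, cv=cv, scoring="roc_auc")
print(f"accuracy {acc.mean():.3f} +/- {acc.std():.3f}, AUC {auc.mean():.3f} +/- {auc.std():.3f}")
```

For the bad-channel study, the imbalance handling and classification step maps directly onto imbalanced-learn and scikit-learn. Below is a minimal sketch, assuming a precomputed matrix of the 12 multimodal features per channel with binary bad-channel labels; the feature extraction from LFPs and spikes is not shown, and the FWEDS step is omitted.

```python
# Minimal sketch of bad-channel classification with SMOTE and a Random Forest.
# The feature matrix and labels are placeholders standing in for the 12
# multimodal features per channel (1 = bad channel).
import numpy as np
from imblearn.over_sampling import SMOTE
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score, precision_score, recall_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.standard_normal((2632, 12))               # placeholder multimodal features
y = (rng.random(2632) < 0.15).astype(int)         # placeholder imbalanced labels

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, stratify=y, random_state=0)

# Oversample only the training set so the test set keeps its natural imbalance.
X_res, y_res = SMOTE(random_state=0).fit_resample(X_tr, y_tr)

clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_res, y_res)
y_pred = clf.predict(X_te)
print("accuracy", accuracy_score(y_te, y_pred),
      "precision", precision_score(y_te, y_pred, zero_division=0),
      "recall", recall_score(y_te, y_pred, zero_division=0))
```

Reporting precision and recall alongside accuracy, as the paper does, matters here because accuracy alone can look high even when few of the rare bad channels are actually detected.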