Recent advances in digital imaging have established computer vision and machine learning as new tools for analyzing pathology images. This trend could automate some tasks in diagnostic pathology and reduce the pathologist's workload. The final step of any cancer diagnosis procedure is performed by an expert pathologist. These experts use microscopes with high levels of optical magnification to observe minute characteristics of tissue acquired through biopsy and fixed on glass slides. Switching between magnifications, and finding the level at which to identify the presence or absence of malignant tissue, is an important part of this process. Because the majority of pathologists still use light microscopes rather than digital scanners, a camera mounted on the microscope is in many instances used to capture snapshots of significant fields of view. Repositories of such snapshots usually do not contain the magnification information. In this paper, we extract deep features from images with known magnification in the TCGA dataset to train a classifier for magnification recognition. We compared the results with local binary patterns (LBP), a well-known handcrafted feature extraction method. The proposed approach achieved a mean accuracy of 96% when a multi-layer perceptron was trained as the classifier.

The Ki-67 labelling index is a biomarker used worldwide to predict the aggressiveness of cancer. To compute the Ki-67 index, pathologists normally count the tumour nuclei on slide images manually; this is time-consuming and subject to inter-pathologist variability. With the development of image processing and machine learning, many methods have been introduced for automatic Ki-67 estimation, but most require manual annotations and are restricted to one type of cancer. In this work, we propose a pooled Otsu's method to generate labels and train a semantic segmentation deep neural network (DNN).
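The label-generation step can be illustrated with a plain-Python sketch of Otsu thresholding applied to pooled intensity histograms. The 256-bin histograms and the histogram-summing pooling scheme are assumptions for illustration; the abstract does not specify the exact formulation.

```python
def otsu_threshold(hist):
    """Threshold (0-255) maximizing between-class variance for a 256-bin histogram."""
    total = sum(hist)
    grand_sum = sum(i * h for i, h in enumerate(hist))
    w_bg, sum_bg = 0, 0
    best_t, best_var = 0, -1.0
    for t in range(256):
        w_bg += hist[t]            # background weight: count of pixels <= t
        sum_bg += t * hist[t]      # background intensity sum
        w_fg = total - w_bg
        if w_bg == 0 or w_fg == 0:
            continue
        mean_bg = sum_bg / w_bg
        mean_fg = (grand_sum - sum_bg) / w_fg
        var_between = w_bg * w_fg * (mean_bg - mean_fg) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t


def pooled_otsu_threshold(histograms):
    """Sum per-image histograms into one pooled histogram, then apply Otsu once,
    so a single threshold is shared across all images."""
    pooled = [0] * 256
    for h in histograms:
        for i, count in enumerate(h):
            pooled[i] += count
    return otsu_threshold(pooled)
```

A single pooled threshold avoids per-image thresholds drifting on slides with few positive nuclei, which is one plausible motivation for pooling.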
The output is post-processed to find the Ki-67 index. Evaluation on two different types of cancer (bladder and breast) results in a mean absolute error of 3.52%. The performance of the DNN trained with automatic labels is better than that of a DNN trained with ground-truth labels by an absolute margin of 1.25%.

Interstitial Cells of Cajal (ICC) are specialized pacemaker cells that generate and actively propagate electrophysiological events called slow waves. Slow waves regulate the motility of the gastrointestinal tract necessary for digesting food. Degradation of the ICC network structure has been qualitatively associated with several gastrointestinal motility disorders. ICC network structure can be obtained using confocal microscopy, but current limitations in imaging and segmentation techniques have hindered an accurate representation of the networks. In this study, supervised machine learning techniques were applied to extract ICC networks from 3D confocal microscopy images. The results showed that the Fast Random Forest classification method using Trainable WEKA Segmentation outperformed the Decision Table and Naïve Bayes classification methods in sensitivity, accuracy, and F-measure. Using the Fast Random Forest classifier, 12 gastric antrum tissue blocks were segmented, and variations in ICC network thickness, density, and process width were quantified for the myenteric plexus ICC network (the primary pacemakers). Our findings demonstrated regional variation in ICC network density and thickness along the circumferential and longitudinal axes of the mouse antrum. An inverse relationship was observed between the proximal and distal antrum for density (proximal 9.8±4.0% vs distal 7.6±4.6%) and thickness (proximal 15±3 μm vs distal 24±10 μm).
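If density is reported as the labelled fraction of the segmented volume, it can be read directly off a binary segmentation mask. This is a minimal sketch under that assumption, with nested lists standing in for a 3D voxel volume; the study's exact density definition may differ.

```python
def icc_network_density(mask):
    """Percentage of voxels labelled as ICC in a 3D binary segmentation mask,
    given as nested lists indexed [z][y][x] with 0/1 values."""
    total, positive = 0, 0
    for plane in mask:          # iterate z-slices
        for row in plane:       # iterate y-rows
            for voxel in row:   # iterate x-voxels
                total += 1
                positive += 1 if voxel else 0
    return 100.0 * positive / total
```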
Limited variation in ICC process width was observed throughout the antrum (5±1 μm). Clinical Relevance: Detailed quantification of regional ICC structural properties will provide insights into the relationship between ICC structure, slow waves, and resultant gut motility, and will improve techniques for the diagnosis and treatment of functional GI motility disorders.

Diabetic retinopathy (DR) is a progressive eye disease that affects a large portion of working-age adults. DR, which may progress to an irreversible state that causes blindness, can be diagnosed with a comprehensive dilated eye exam. With the eye dilated, the doctor takes pictures of the inside of the eye via a medical procedure called fluorescein angiography, in which a dye injected into the bloodstream highlights the blood vessels in the back of the eye so they can be photographed. In addition, the doctor may request an optical coherence tomography (OCT) exam, which produces cross-sectional photos of the retina to measure its thickness. Early prognostication is vital for treating the disease and preventing it from progressing into advanced, irreversible stages, yet skilled medical personnel and suitable medical facilities are required to detect DR across its five major stages. In this paper, we propose a diagnostic tool to detect diabetic retinopathy from fundus images using an ensemble of multi-inception CNNs. Our inception block consists of three convolutional layers with kernel sizes of 3x3, 5x5, and 1x1 whose outputs are concatenated along the depth dimension and forwarded to a max-pooling layer. We experimentally compare our proposed method with two pre-trained models, VGG16 and GoogLeNet.
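Because the three parallel convolutions in such a block use 'same' padding, depth concatenation followed by max pooling fixes the block's output shape; the helper below computes it. The per-branch channel counts and the 2x2 pool are illustrative assumptions, not values taken from the paper.

```python
def inception_block_output_shape(h, w, branch_channels=(32, 32, 32), pool=2):
    """Output (height, width, channels) of an inception-style block with three
    'same'-padded parallel convolutions (3x3, 5x5, 1x1) whose feature maps are
    concatenated along depth and then max-pooled with stride `pool`."""
    channels = sum(branch_channels)    # depth concatenation sums branch channels
    return h // pool, w // pool, channels
```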
The experimental results show that the proposed method achieves an accuracy of 93.2% with an ensemble of 10 random networks, compared to 81% obtained with transfer learning based on VGG19.

As many algorithms depend on a suitable representation of data, learning unique features is considered a crucial task. Although supervised techniques using deep neural networks have boosted the performance of representation learning, the need for large sets of labeled data limits the application of such methods. As an example, producing high-quality delineations of regions of interest in the field of pathology is a tedious and time-consuming task due to the large image dimensions. In this work, we explored the performance of a deep neural network with triplet loss for representation learning. We investigated the notion of similarity and dissimilarity in pathology whole-slide images and, in our experiments, compared setups ranging from unsupervised and semi-supervised to supervised learning. Additionally, different approaches were tested by applying few-shot learning on two publicly available pathology image datasets. We achieved high accuracy and generalization when the learned representations were applied to two different pathology datasets.
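The triplet objective described above can be sketched in plain Python: it penalizes an anchor embedding that is not at least a margin closer to a positive (similar) example than to a negative (dissimilar) one. Squared Euclidean distance and a margin of 1.0 are common defaults, assumed here rather than taken from the paper.

```python
def triplet_loss(anchor, positive, negative, margin=1.0):
    """Hinge-style triplet loss on embedding vectors (lists of floats):
    loss = max(0, d(a, p) - d(a, n) + margin), with d the squared L2 distance."""
    def sq_dist(u, v):
        return sum((x - y) ** 2 for x, y in zip(u, v))
    return max(0.0, sq_dist(anchor, positive) - sq_dist(anchor, negative) + margin)
```

Minimizing this loss over many triplets pulls same-class patches together in embedding space and pushes different-class patches apart, which is what makes the learned representations transferable across pathology datasets.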