Geographic Atrophy (GA), a hallmark of the advanced stage of non-exudative Age-related Macular Degeneration (AMD), is a significant cause of sustained visual acuity loss. Automatic localization of retinal regions affected by GA is a fundamental step for clinical diagnosis. In this paper, we present a novel weakly supervised model for GA segmentation in Spectral-Domain Optical Coherence Tomography (SD-OCT) images. A novel Multi-Scale Class Activation Map (MS-CAM) is proposed to highlight the discriminative regions for localization and to capture fine detail. To extract informative multi-scale features, we design a Scaling and UpSampling (SUS) module to balance the information content between features of different scales. To capture more discriminative features, an Attentional Fully Connected (AFC) module is proposed, introducing an attention mechanism into the fully connected operations to enhance significant informative features and suppress less useful ones. Based on these location cues, the final GA region prediction is obtained by projection segmentation of the MS-CAM. Experimental results on two independent datasets demonstrate that the proposed weakly supervised model outperforms conventional GA segmentation methods and achieves similar or superior accuracy compared with fully supervised approaches. The source code has been released and is available on GitHub at https://github.com/jizexuan/Multi-Scale-Class-Activation-Map-Tensorflow.

The gold standard clinical tool for evaluating visual dysfunction in cases of glaucoma and other disorders of vision remains the visual field or threshold perimetry exam. Administration of this exam has evolved over the years into a sophisticated, standardized, automated algorithm that relies heavily on specifics of disease processes particular to common retinal disorders.
The purpose of this study is to evaluate the utility of a novel general estimator applied to visual field testing. A multidimensional psychometric function estimation tool, built on semiparametric probabilistic classification rather than multiple logistic regression, was applied to visual field estimation. It combines the flexibility of nonparametric estimators with the efficiency of parametric estimators. Simulated visual fields were generated from human patients with a variety of diagnoses, and the errors between simulated ground truth and estimated visual fields were quantified. Error rates of the estimates were low, typically within 2 dB of ground truth on average. The greatest threshold errors appeared to be confined to the portions of the threshold function with the highest spatial frequencies. This method can accurately estimate a variety of visual field profiles with continuous threshold estimates, potentially using a relatively small number of stimuli.

Due to the increasing volume of medical data for coronary heart disease (CHD) diagnosis, how to assist doctors in making proper clinical diagnoses has attracted considerable attention. However, this task faces many challenges, including personalized diagnosis, high-dimensional datasets, clinical privacy concerns, and insufficient computing resources. To handle these issues, we propose a novel blockchain-enabled contextual online learning model under local differential privacy for CHD diagnosis in mobile edge computing. Edge nodes in the network can collaborate with each other to achieve information sharing, which guarantees that CHD diagnosis is suitable and reliable. To support the dynamically growing dataset, we adopt an adaptively partitioned top-down tree structure to organize medical records. Furthermore, we consider patients' contexts (e.g., lifestyle, medical history records, and physical features) to provide more accurate diagnosis.
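The adaptively partitioned top-down tree described above can be sketched as follows. This is a minimal illustration under stated assumptions, not the authors' implementation: the class name, the split rule (widest dimension at its midpoint), and the `capacity` parameter are all assumptions.

```python
class ContextNode:
    """Hypothetical node covering a hypercube of the normalized
    context space. A leaf stores records until it exceeds
    `capacity`, then splits top-down so the partition adapts to
    the growing dataset."""

    def __init__(self, lo, hi, capacity=8):
        self.lo, self.hi = list(lo), list(hi)  # bounds per context dimension
        self.capacity = capacity
        self.records = []        # (context, label) pairs held at a leaf
        self.children = None     # (left, right) after a split

    def insert(self, context, label):
        if self.children is not None:
            self._child_for(context).insert(context, label)
            return
        self.records.append((context, label))
        if len(self.records) > self.capacity:
            self._split()

    def _split(self):
        # Split the widest dimension at its midpoint (an assumed rule).
        dim = max(range(len(self.lo)), key=lambda d: self.hi[d] - self.lo[d])
        mid = (self.lo[dim] + self.hi[dim]) / 2.0
        left_hi = self.hi[:]; left_hi[dim] = mid
        right_lo = self.lo[:]; right_lo[dim] = mid
        self._split_dim, self._split_mid = dim, mid
        self.children = (ContextNode(self.lo, left_hi, self.capacity),
                         ContextNode(right_lo, self.hi, self.capacity))
        # Redistribute the stored records into the new children.
        for ctx, lab in self.records:
            self._child_for(ctx).insert(ctx, lab)
        self.records = []

    def _child_for(self, context):
        left, right = self.children
        return left if context[self._split_dim] < self._split_mid else right

    def leaf_for(self, context):
        """Descend to the leaf cell responsible for a patient context."""
        node = self
        while node.children is not None:
            node = node._child_for(context)
        return node
```

Looking up `leaf_for(context)` then yields the cell of similar patients on which a local diagnosis estimate could be maintained.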
In addition, to protect the privacy of patients and medical transactions without any trusted third party, we utilize local differential privacy with a randomised response mechanism and ensure blockchain-enabled information-sharing authentication under multi-party computation. Based on theoretical analysis, we confirm that our model provides real-time and precise CHD diagnosis for patients with sublinear regret, and achieves efficient privacy protection. The experimental results validate that our algorithm outperforms benchmark algorithms in running time, error rate, and diagnostic accuracy.

Vascular structures in the retina contain important information for the detection and analysis of ocular diseases, including age-related macular degeneration, diabetic retinopathy and glaucoma. Commonly used modalities in the diagnosis of these diseases are fundus photography, scanning laser ophthalmoscopy (SLO) and fluorescein angiography (FA). Typically, retinal vessel segmentation is carried out either manually or interactively, which makes it time-consuming and prone to human error. In this research, we propose a new multi-modal framework for vessel segmentation called ELEMENT (vEsseL sEgmentation using Machine lEarning and coNnecTivity). This framework consists of feature extraction and pixel-based classification using region growing and machine learning. The proposed features capture complementary evidence based on grey level and vessel connectivity properties. The latter information is seamlessly propagated through the pixels in the classification phase. ELEMENT reduces inconsistencies and speeds up the segmentation throughput. We analyze and compare the performance of the proposed approach against state-of-the-art vessel segmentation algorithms in three major groups of experiments, one for each of the ocular modalities.
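Returning to the CHD diagnosis model above: its randomised-response mechanism for local differential privacy can be sketched with the classic binary randomised-response scheme. This is a generic illustration, not the paper's exact protocol; the function names and the frequency-debiasing step are assumptions.

```python
import math
import random

def randomized_response(bit, epsilon):
    """Classic binary randomised response for epsilon-local DP:
    report the true bit with probability e^eps / (1 + e^eps),
    otherwise report its flip. (Generic sketch, not the paper's
    exact protocol.)"""
    p_true = math.exp(epsilon) / (1.0 + math.exp(epsilon))
    return bit if random.random() < p_true else 1 - bit

def estimate_frequency(reports, epsilon):
    """Debias the aggregated noisy reports to recover an unbiased
    estimate of the true frequency of 1-bits."""
    p = math.exp(epsilon) / (1.0 + math.exp(epsilon))
    observed = sum(reports) / len(reports)
    # observed = true*p + (1-true)*(1-p)  =>  solve for true:
    return (observed + p - 1.0) / (2.0 * p - 1.0)
```

Each patient perturbs their own bit locally before sharing it, so no trusted third party ever sees the raw value, yet population-level statistics remain recoverable.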
Our method achieved higher overall performance, with an overall accuracy of 97.40%, than 25 of the 26 state-of-the-art approaches, including six based on deep learning, evaluated on the widely known DRIVE fundus image dataset. On the STARE, CHASE-DB, VAMPIRE FA, IOSTAR SLO and RC-SLO datasets, the proposed framework outperformed all of the state-of-the-art methods, with accuracies of 98.27%, 97.78%, 98.34%, 98.04% and 98.35%, respectively.
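The region-growing step that propagates connectivity evidence through the pixels, as described in the ELEMENT abstract above, can be sketched as follows. This is a minimal grey-level-based sketch under stated assumptions, not the authors' implementation; the tolerance rule, the running-mean criterion, and the parameter names are assumptions.

```python
import numpy as np
from collections import deque

def region_grow(image, seeds, tol=10, connectivity=8):
    """Grow a vessel mask from seed pixels (assumed to lie on
    vessels): absorb neighbouring pixels whose grey level is
    within `tol` of the region's running mean, so evidence
    propagates pixel by pixel along connected structures."""
    h, w = image.shape
    mask = np.zeros((h, w), dtype=bool)
    if connectivity == 8:
        nbrs = [(-1, -1), (-1, 0), (-1, 1), (0, -1),
                (0, 1), (1, -1), (1, 0), (1, 1)]
    else:  # 4-connectivity
        nbrs = [(-1, 0), (1, 0), (0, -1), (0, 1)]
    queue = deque(seeds)
    total, count = 0.0, 0
    for r, c in seeds:
        mask[r, c] = True
        total += float(image[r, c]); count += 1
    while queue:
        r, c = queue.popleft()
        mean = total / count
        for dr, dc in nbrs:
            rr, cc = r + dr, c + dc
            if 0 <= rr < h and 0 <= cc < w and not mask[rr, cc]:
                if abs(float(image[rr, cc]) - mean) <= tol:
                    mask[rr, cc] = True
                    total += float(image[rr, cc]); count += 1
                    queue.append((rr, cc))
    return mask
```

In a full pipeline such as the one described above, a machine-learning classifier would supply or refine the membership criterion instead of a fixed grey-level tolerance.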