An important need exists for reliable PET tumor-segmentation methods for tasks such as PET-based radiation-therapy planning and reliable quantification of volumetric and radiomic features. To address this need, we propose an automated, physics-guided, deep-learning-based three-module framework to segment PET images on a per-slice basis. The framework is designed to address the challenges of limited spatial resolution and the lack of clinical training data with known ground-truth tumor boundaries in PET. The first module generates PET images containing highly realistic tumors with known ground truth using a new stochastic and physics-based approach, addressing the lack of training data. The second module trains a modified U-net on these images, helping it learn the tumor-segmentation task. The third module fine-tunes this network using a small clinical dataset with radiologist-defined delineations as surrogate ground truth, helping the framework learn features potentially missed in simulated tumors. The framework demonstrated reliable performance in delineating tumors in FDG-PET images of patients with lung cancer. © 2020 Institute of Physics and Engineering in Medicine.

We propose a multi-view data analysis approach using radiomics and dosiomics (R&D) texture features for predicting acute-phase weight loss (WL) in lung cancer radiotherapy. Baseline weight of 388 patients who underwent intensity-modulated radiation therapy (IMRT) was measured between 1 month prior to and 1 week after the start of IMRT. Weight change between 1 week and 2 months after the commencement of IMRT was analyzed and dichotomized at 5% WL. Each patient had a planning CT and contours of the gross tumor volume (GTV) and esophagus (ESO). A total of 355 features, including clinical parameter (CP), GTV and ESO (GTV&ESO) dose-volume histogram (DVH), GTV radiomics, and GTV&ESO dosiomics features, were extracted. R&D features were categorized as first- (L1), second- (L2),