Purpose To develop a multichannel deep neural network (mcDNN) classification model based on multiscale brain functional connectome data and to demonstrate the value of this model by using attention deficit hyperactivity disorder (ADHD) detection as an example. Materials and Methods In this retrospective case-control study, existing data from the Neuro Bureau ADHD-200 dataset, consisting of 973 participants, were used. Multiscale functional brain connectomes based on both anatomic and functional criteria were constructed. The mcDNN model used the multiscale brain connectome data and personal characteristic data (PCD) as joint features to detect ADHD and to identify the most predictive brain connectome features for ADHD diagnosis. The mcDNN model was compared with single-channel deep neural network (scDNN) models, and classification performance was evaluated through cross-validation and hold-out validation with the metrics of accuracy, sensitivity, specificity, and area under the receiver operating characteristic curve (AUC). Results In the cross-validation, the mcDNN model using combined features (fusion of the multiscale brain connectome data and PCD) achieved the best performance in ADHD detection, with an AUC of 0.82 (95% confidence interval [CI] 0.80, 0.83), compared with scDNN models using the features of the brain connectome at each individual scale and PCD independently. In the hold-out validation, the mcDNN model achieved an AUC of 0.74 (95% CI 0.73, 0.76). Conclusion An mcDNN model was developed for multiscale brain functional connectome data, and its utility for ADHD detection was demonstrated. By fusing the multiscale brain connectome data, the mcDNN model improved ADHD detection performance considerably over the use of a single scale. © RSNA, 2019.

A publicly available dataset containing k-space data as well as Digital Imaging and Communications in Medicine (DICOM) image data of knee images for accelerated MR image reconstruction using machine learning is presented. © RSNA, 2020.

Purpose To evaluate the use of artificial intelligence (AI) to shorten digital breast tomosynthesis (DBT) reading time while maintaining or improving accuracy. Materials and Methods A deep learning AI system was developed to identify suspicious soft-tissue and calcified lesions in DBT images. A reader study compared the performance of 24 radiologists (13 of whom were breast subspecialists) reading 260 DBT examinations (including 65 cancer cases) both with and without AI. Readings occurred in two sessions separated by at least 4 weeks. Area under the receiver operating characteristic curve (AUC), reading time, sensitivity, specificity, and recall rate were evaluated with statistical methods for multireader, multicase studies. Results Radiologist performance for the detection of malignant lesions, measured by mean AUC, increased 0.057 with the use of AI (95% confidence interval [CI] 0.028, 0.087; P < .01), from 0.795 without AI to 0.852 with AI. Reading time decreased 52.7% (95% CI 41.8%, 61.5%; P < .01), from 64.1 seconds without AI to 30.4 seconds with AI.
Sensitivity increased from 77.0% without AI to 85.0% with AI (8.0%; 95% CI 2.6%, 13.4%; P < .01), specificity increased from 62.7% without AI to 69.6% with AI (6.9%; 95% CI 3.0%, 10.8%; noninferiority P < .01), and the recall rate for noncancers decreased from 38.0% without AI to 30.9% with AI (7.2%; 95% CI 3.1%, 11.2%; noninferiority P < .01). Conclusion The concurrent use of an accurate DBT AI system was found to improve cancer detection efficacy in a reader study that demonstrated increases in AUC, sensitivity, and specificity and a reduction in recall rate and reading time. © RSNA, 2019. See also the commentary by Hsu and Hoyt in this issue.

Purpose To describe an unsupervised three-dimensional cardiac motion estimation network (CarMEN) for deformable motion estimation from two-dimensional cine MR images. Materials and Methods A function was implemented by using CarMEN, a convolutional neural network that takes two three-dimensional input volumes and outputs a motion field. A smoothness constraint was imposed on the field by regularizing the Frobenius norm of its Jacobian matrix. CarMEN was trained and tested with data from 150 cardiac patients who underwent MRI examinations and was validated on synthetic (n = 100) and pediatric (n = 33) datasets. CarMEN was compared with five state-of-the-art nonrigid body registration methods by using several performance metrics, including the Dice similarity coefficient (DSC) and end-point error. Results On the synthetic dataset, CarMEN achieved a median DSC of 0.85, which was higher than that of all five methods (minimum-maximum median [or MMM], 0.67-0.84; P < .05), and end-point errors comparable to or better than those of all other methods. All P values were derived from pairwise testing. For all other metrics, CarMEN achieved better accuracy on all datasets than all other techniques except for one, which had the worst motion estimation accuracy. Conclusion The proposed deep learning-based approach for three-dimensional cardiac motion estimation allowed the derivation of a motion model that balances motion characterization and image registration accuracy and achieved motion estimation accuracy comparable to or better than that of several state-of-the-art image registration algorithms. © RSNA, 2019. Supplemental material is available for this article.

Purpose To investigate the feasibility of using a deep learning-based approach to detect an anterior cruciate ligament (ACL) tear within the knee joint at MRI by using arthroscopy as the reference standard. Materials and Methods A fully automated deep learning-based diagnosis system was developed by using two deep convolutional neural networks (CNNs) to isolate the ACL on MR images, followed by a classification CNN to detect structural abnormalities within the isolated ligament. With institutional review board approval, sagittal proton density-weighted and fat-suppressed T2-weighted fast spin-echo MR images of the knee in 175 subjects with a full-thickness ACL tear (98 male and 77 female subjects; average age, 27.5 years) and 175 subjects with an intact ACL (100 male and 75 female subjects; average age, 39.4 years) were retrospectively analyzed by using the deep learning approach. The sensitivity and specificity of the ACL tear detection system and of five clinical radiologists for detecting an ACL tear were determined by using arthroscopic results as the reference standard.
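
The mcDNN abstract above describes fusing connectome features from several scales with personal characteristic data, but gives no architectural detail. As a rough illustration of that kind of multichannel fusion, the following sketch encodes each connectome scale with its own channel and concatenates the embeddings with a PCD embedding before a binary classifier; the layer sizes, the concatenation-based fusion, and the input dimensions are illustrative assumptions, not the authors' published configuration.

```python
# Minimal sketch of a multichannel fusion network in PyTorch.
# Assumptions (not from the paper): each connectome scale is a flattened
# connectivity vector, PCD is a short numeric vector, and channels are fused
# by concatenating their embeddings before a classifier head.
import torch
import torch.nn as nn

class MultiChannelNet(nn.Module):
    def __init__(self, connectome_dims, pcd_dim, embed_dim=64):
        super().__init__()
        # One encoder ("channel") per connectome scale.
        self.channels = nn.ModuleList(
            nn.Sequential(nn.Linear(d, 256), nn.ReLU(),
                          nn.Linear(256, embed_dim), nn.ReLU())
            for d in connectome_dims
        )
        # Separate small encoder for personal characteristic data (PCD).
        self.pcd_encoder = nn.Sequential(nn.Linear(pcd_dim, embed_dim), nn.ReLU())
        # Fusion by concatenation, followed by a binary classifier head.
        fused = embed_dim * (len(connectome_dims) + 1)
        self.classifier = nn.Sequential(nn.Linear(fused, 64), nn.ReLU(),
                                        nn.Dropout(0.5), nn.Linear(64, 1))

    def forward(self, connectomes, pcd):
        # connectomes: list of (batch, dim) tensors, one per scale.
        embeddings = [enc(x) for enc, x in zip(self.channels, connectomes)]
        embeddings.append(self.pcd_encoder(pcd))
        return self.classifier(torch.cat(embeddings, dim=1))  # logits

# Example with three hypothetical connectome scales and a 4-feature PCD vector.
model = MultiChannelNet(connectome_dims=[4950, 1225, 300], pcd_dim=4)
logits = model([torch.randn(8, 4950), torch.randn(8, 1225), torch.randn(8, 300)],
               torch.randn(8, 4))
probs = torch.sigmoid(logits)  # per-subject probability of the positive class
```

Per-scale channels let each connectome resolution keep its own representation before fusion, which is one plausible reading of "multichannel" in the abstract.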
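
The knee-MRI dataset abstract mentions raw k-space data alongside DICOM images. For readers unfamiliar with why k-space matters for accelerated reconstruction, the sketch below shows the standard relationship between k-space and the image (a centered inverse 2D FFT) and a retrospectively undersampled, zero-filled baseline; the array shape and sampling mask are illustrative only and do not reflect the dataset's actual file format.

```python
# Illustration of k-space -> image reconstruction and of the zero-filled
# baseline that accelerated (undersampled) acquisitions produce. Shapes,
# acceleration factor, and mask design are assumptions for the example.
import numpy as np

def ifft2c(kspace):
    """Centered inverse 2D FFT: k-space -> complex image."""
    return np.fft.fftshift(np.fft.ifft2(np.fft.ifftshift(kspace)))

rng = np.random.default_rng(0)
kspace = rng.standard_normal((320, 320)) + 1j * rng.standard_normal((320, 320))

# Retrospective ~4x undersampling: keep every 4th phase-encode line plus a
# fully sampled low-frequency band in the center of k-space.
mask = np.zeros(320, dtype=bool)
mask[::4] = True
mask[320 // 2 - 16: 320 // 2 + 16] = True

full_image = np.abs(ifft2c(kspace))                      # fully sampled reference
zero_filled = np.abs(ifft2c(kspace * mask[None, :]))     # aliased baseline reconstruction
```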
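
The CarMEN abstract states that smoothness is imposed by regularizing the Frobenius norm of the motion field's Jacobian. Below is a minimal sketch of such a penalty, approximating the Jacobian with forward finite differences; the reduction (mean rather than sum) and the weight given to the penalty are assumptions, since the abstract does not specify them.

```python
# Sketch of a smoothness penalty of the kind described for CarMEN: the
# (squared) Frobenius norm of the Jacobian of the displacement field,
# approximated with forward finite differences along each spatial axis.
import torch

def jacobian_frobenius_penalty(flow):
    """flow: (batch, 3, D, H, W) displacement field with x/y/z components."""
    dz = flow[:, :, 1:, :, :] - flow[:, :, :-1, :, :]   # d(flow)/dz
    dy = flow[:, :, :, 1:, :] - flow[:, :, :, :-1, :]   # d(flow)/dy
    dx = flow[:, :, :, :, 1:] - flow[:, :, :, :, :-1]   # d(flow)/dx
    # Squared Frobenius norm = sum of squared partial derivatives (mean-reduced here).
    return dz.pow(2).mean() + dy.pow(2).mean() + dx.pow(2).mean()

# Usage: weight the penalty against an image-similarity term during training.
flow = torch.randn(2, 3, 16, 64, 64, requires_grad=True)
loss = jacobian_frobenius_penalty(flow)   # would be added to a similarity loss
loss.backward()
```

Penalizing the Jacobian's Frobenius norm discourages abrupt spatial changes in the estimated motion, which is the trade-off the abstract refers to between motion characterization and registration accuracy.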
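
The ACL abstract describes a cascade in which CNNs first isolate the ligament and a classification CNN then labels the isolated region. The schematic sketch below shows that localize-crop-classify flow with deliberately small placeholder networks; the architectures, crop size, and single localization stage are simplifications for illustration, not the authors' system.

```python
# Schematic localize -> crop -> classify cascade (placeholder networks).
import torch
import torch.nn as nn

class BoxRegressor(nn.Module):
    """Predicts a (row, col, height, width) box around the ligament."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
                                 nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
                                 nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, 4))

    def forward(self, x):
        return self.net(x)

class CropClassifier(nn.Module):
    """Classifies the isolated crop as intact vs. torn."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                                 nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
                                 nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, 1))

    def forward(self, crop):
        return self.net(crop)

def detect_tear(image, localizer, classifier, crop_size=64):
    # 1) Localize: only the predicted box corner is used here for a fixed-size crop.
    r, c, _, _ = localizer(image.unsqueeze(0))[0].round().int().tolist()
    r = max(0, min(r, image.shape[-2] - crop_size))
    c = max(0, min(c, image.shape[-1] - crop_size))
    crop = image[..., r:r + crop_size, c:c + crop_size]
    # 2) Classify the isolated region.
    return torch.sigmoid(classifier(crop.unsqueeze(0)))  # probability of a tear

image = torch.randn(1, 128, 128)  # one sagittal slice (channel, H, W)
p_tear = detect_tear(image, BoxRegressor(), CropClassifier())
```

Cropping to the localized ligament before classification lets the final CNN focus on the structure of interest rather than the whole knee, which is the rationale the abstract implies for the two-stage design.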