This technical report summarizes the GLOBE Observer data set from 1 April 2016 to 1 December 2019. GLOBE Observer is an ongoing NASA-sponsored international citizen science project that is part of the larger Global Learning and Observations to Benefit the Environment (GLOBE) Program, which has been in operation since 1995. Of the citizen science projects in NASA's Earth Science Division, GLOBE Observer has the greatest number of participants and the widest geographic coverage. Participants use the GLOBE Observer mobile app (launched in 2016) to collect atmospheric, hydrologic, and terrestrial observations. The app connects participants to satellite observations from Aqua, Terra, CALIPSO, GOES, Himawari, and Meteosat. Thirty-eight thousand participants have contributed 320,000 observations worldwide, including 1,000,000 georeferenced photographs; replicating this effort would take an individual more than 13 years. The GLOBE Observer app has substantially increased the spatial extent and sampling density of GLOBE measurements and more than doubled the number of measurements collected through the GLOBE Program. GLOBE Observer data are publicly available at observer.globe.gov.

A new model validation and performance assessment tool is introduced: the sliding threshold of observation for numeric evaluation (STONE) curve. It is based on the relative operating characteristic (ROC) curve technique, but instead of sorting all observations into a categorical classification, the STONE tool exploits the continuous nature of the observations. Rather than defining events in the observations and then sliding the threshold only in the classifier/model data set, the threshold is changed simultaneously for both the observational and model values, with the same threshold value applied to both.
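Concretely, the simultaneous sweep can be sketched as follows. This is a minimal illustration in Python with NumPy; `stone_curve` is a hypothetical helper written for this summary, not the authors' reference implementation.

```python
import numpy as np

def stone_curve(obs, model, thresholds):
    """Sketch of the STONE construction: for each threshold t, the SAME t
    defines events in both the observed and the modeled values."""
    pod, pofd = [], []
    for t in thresholds:
        event_obs = obs >= t
        event_mod = model >= t
        hits = np.sum(event_obs & event_mod)
        misses = np.sum(event_obs & ~event_mod)
        false_alarms = np.sum(~event_obs & event_mod)
        correct_negs = np.sum(~event_obs & ~event_mod)
        # Probability of detection and probability of false detection;
        # NaN where a contingency-table denominator is empty.
        pod.append(hits / (hits + misses) if hits + misses else np.nan)
        pofd.append(false_alarms / (false_alarms + correct_negs)
                    if false_alarms + correct_negs else np.nan)
    return np.array(pofd), np.array(pod)
```

Plotting POD against POFD over a dense set of thresholds then traces the STONE curve; because the event sets on both axes change with the threshold, the curve need not be monotonic.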
This simultaneous thresholding is only possible if the observations are continuous and the model output is in the same units and on the same scale as the observations; that is, the model attempts to reproduce the data exactly. The STONE curve shares several features with the ROC curve: it plots probability of detection against probability of false detection, it runs from the (1,1) corner at low thresholds to the (0,0) corner at high thresholds, and values above the zero-intercept, unit-slope line indicate better-than-random predictive ability. The main difference is that the STONE curve can be nonmonotonic, doubling back in both the x and y directions; these ripples reveal asymmetries in the data-model value pairs. The new technique is applied to model output of a common geomagnetic activity index as well as energetic electron fluxes in Earth's inner magnetosphere. It is not limited to space physics but can be used in any scientific or engineering field where numerical models are used to reproduce observations.

OSIRIS-REx began observing particle ejection events shortly after entering orbit around near-Earth asteroid (101955) Bennu in January 2019. For some of these events, the only observations of the ejected particles come from the first two images taken immediately after the event by OSIRIS-REx's NavCam 1 imager. Without three or more observations of each particle, traditional orbit determination is not possible. However, by assuming that, for a given event, all particles ejected at the same time and location, and by approximating their velocities as constant after ejection (a reasonable approximation for fast-moving particles, i.e., those with velocities on the order of 10 cm/s or greater, given Bennu's weak gravity), we show that it is possible to estimate the particles' states from only two observations each. We applied this newly developed technique to reconstruct the particle ejection events observed by the OSIRIS-REx spacecraft during orbit about Bennu.
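Under those assumptions, each of the two images supplies a line of sight, and the state estimate reduces to a small linear least-squares problem. The sketch below illustrates the idea; the variable names and setup are illustrative, not taken from the paper, and a known ejection site `r0` and time `t0` are assumed.

```python
import numpy as np

def estimate_particle_state(r0, t0, s1, t1, u1, s2, t2, u2):
    """Two-observation, constant-velocity state estimation (sketch).
    Each observation i gives a line of sight from spacecraft position s_i
    along unit vector u_i at time t_i:
        s_i + rho_i * u_i = r0 + v * (t_i - t0)
    Unknowns x = [rho1, rho2, vx, vy, vz] (5) against 6 scalar equations,
    solved by linear least squares."""
    dt1, dt2 = t1 - t0, t2 - t0
    A = np.zeros((6, 5))
    A[0:3, 0] = u1
    A[0:3, 2:5] = -dt1 * np.eye(3)
    A[3:6, 1] = u2
    A[3:6, 2:5] = -dt2 * np.eye(3)
    b = np.concatenate([r0 - s1, r0 - s2])
    x, *_ = np.linalg.lstsq(A, b, rcond=None)
    return x[2:], x[0], x[1]  # velocity v, ranges rho1 and rho2
```

With noise-free synthetic sightings the system is consistent and the recovered velocity matches the truth; with real measurement noise the least-squares residuals absorb the errors, which is one source of the biases discussed below.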
Particles were estimated to have been ejected with inertial velocities ranging from 7 cm/s to 3.3 m/s, leading to a variety of trajectory types. Most (>80%) of the analyzed events were estimated to have originated from midlatitude regions and to have occurred after noon, between 12:44 and 18:52 local solar time. Comparison with higher-fidelity orbit determination solutions for the events with sufficient observations demonstrates the validity of our approach and sheds light on its biases. Our technique offers the capacity to meaningfully constrain the properties of particle ejection events from limited data.

A new model was recently introduced to correct for higher-order ionospheric residual biases in radio occultation (RO) data. The model depends on the square of the α1 and α2 dual-frequency bending angle difference and on a factor κ, which varies with time, season, solar activity, and height, requiring only the F10.7 solar radio flux index as additional background information. To date, this kappa-correction had been analyzed only in simulation studies; in this study, we test it on real observed Metop-A RO data. The goal is to improve the accuracy of monthly mean RO climate records, potentially extending the accuracy of RO data toward higher stratospheric altitudes. We performed a thorough analysis of the kappa-correction, evaluating its ionospheric sensitivity over the solar cycle for monthly RO climatologies and comparing the kappa-corrected RO stratospheric climatologies to three other data sets from reanalysis and passive infrared sounding. We find a clear dependence of the kappa-correction on solar activity, geographic location, and altitude; hence, it reduces systematic errors that vary with the solar cycle. From low to high solar activity conditions, the correction can increase from about 0.2 K to more than 2.0 K at altitudes between 40 and 45 km. The correction shifts RO climatologies toward warmer temperatures.
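Schematically, the correction subtracts a bias term proportional to the squared dual-frequency bending-angle difference from the standard ionosphere-free combination. The sketch below assumes the usual GPS L1/L2 formulation; the κ value itself must come from the model described above (driven by F10.7, height, season, and local time), and the exact coefficients here are illustrative.

```python
def kappa_corrected_bending_angle(alpha1, alpha2, kappa):
    """Sketch of the higher-order ionospheric (kappa) correction:
        alpha_corr = alpha_LC - kappa * (alpha1 - alpha2)**2
    where alpha_LC is the standard dual-frequency linear combination."""
    f1, f2 = 1575.42e6, 1227.60e6            # GPS L1/L2 carrier frequencies (Hz)
    c1 = f1**2 / (f1**2 - f2**2)
    c2 = f2**2 / (f1**2 - f2**2)
    alpha_lc = c1 * alpha1 - c2 * alpha2     # ionosphere-free combination
    return alpha_lc - kappa * (alpha1 - alpha2) ** 2
```

When the two bending angles agree, the correction term vanishes; a larger κ (high solar activity) subtracts more, which at climatology level translates into the warmer temperatures reported above.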
With respect to the other data sets, however, we found it difficult to draw firm conclusions, because their biases appear to be of similar magnitude to the size of the kappa-correction. Further validation with more accurate data will be useful.

Purpose: In vivo optical imaging technologies such as high-resolution microendoscopy (HRME) can image nuclei of the oral epithelium. In principle, automated algorithms can then calculate nuclear features to distinguish neoplastic from benign tissue. However, images frequently contain regions without visible nuclei, due to biological and technical factors, which reduces the data available to image analysis algorithms and thus their accuracy. Approach: We developed the nuclear density-confidence interval (ND-CI) algorithm to determine whether an HRME image contains sufficient nuclei for classification or whether a better image is required. The algorithm uses a convolutional neural network to exclude image regions without visible nuclei. The remaining regions are then used to estimate a confidence interval (CI) for the number of abnormal nuclei per mm², a feature used by a previously developed algorithm (the ND algorithm) to classify images as benign or neoplastic. The range of the CI determines whether the ND-CI algorithm can classify an image with confidence, and if so, the predicted category.
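The decision logic can be sketched as follows. The CI construction here (a Poisson normal-approximation interval over the usable area) and the function name are illustrative assumptions for this summary, not the authors' exact method.

```python
import math

def nd_ci_classify(n_abnormal, usable_area_mm2, threshold, z=1.96):
    """ND-CI decision sketch: form a CI for abnormal nuclei per mm^2
    over the regions the CNN kept, then classify only if the CI clears
    the ND algorithm's density threshold on one side."""
    if usable_area_mm2 <= 0:
        return "insufficient image"          # no regions with visible nuclei
    density = n_abnormal / usable_area_mm2
    # Poisson normal-approximation half-width for the density estimate
    half_width = z * math.sqrt(max(n_abnormal, 1)) / usable_area_mm2
    lo, hi = density - half_width, density + half_width
    if lo > threshold:
        return "neoplastic"
    if hi < threshold:
        return "benign"
    return "insufficient image"              # CI straddles the threshold
```

A narrow CI well away from the threshold yields a confident call; a wide or straddling CI signals that a better image should be acquired, which is the behavior the ND-CI algorithm is designed to provide.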