The research method used and the proposed general algorithm for the reconstruction of multidimensional Gaussian processes have not been discussed in the literature.

Cities are among the best examples of complex systems. The adaptive components of a city, such as its people, firms, institutions, and physical structures, form intricate and often non-intuitive interdependencies with one another. These interdependencies can be quantified and represented as links of a network that give visibility to otherwise cryptic structural elements of urban systems. Here, we use aspects of information theory to elucidate the interdependence network among labor skills, illuminating parts of the hidden economic structure of cities. Using pairwise interdependencies, we compute an aggregate, skills-based measure of system "tightness" of a city's labor force, capturing the degree of integration or internal connectedness of a city's economy. We find that urban economies with higher tightness tend to be more productive in terms of higher GDP per capita. However, related work has shown that cities with higher system tightness are also more negatively affected by shocks. Thus, our skills-based metric may offer additional insights into a city's resilience. Finally, we demonstrate how viewing the web of interdependent skills as a weighted network can lead to additional insights about cities and their economies.

The complexity of a heart rate variability (HRV) signal is considered an important nonlinear feature for detecting cardiac abnormalities. This work aims to explain the physiological meaning of a recently developed complexity measurement method, namely distribution entropy (DistEn), in the context of HRV signal analysis. We thereby propose a modified distribution entropy (mDistEn) to remove the physiological discrepancy involved in the computation of DistEn. The proposed method generates a distance matrix that is devoid of over-exerted multi-lag signal changes. Restricted element selection in the distance matrix makes mDistEn a computationally inexpensive and physiologically more relevant complexity measure than DistEn.

Differential geometry offers a powerful framework for optimising and characterising finite-time thermodynamic processes, both classical and quantum. Here, we start with a pedagogical introduction to the notion of thermodynamic length. We review and connect the different frameworks where it emerges in the quantum regime: adiabatically driven closed systems, time-dependent Lindblad master equations, and discrete processes. A geometric lower bound on entropy production in finite time is then presented, which represents a quantum generalisation of the original classical bound. Following this, we review and develop some general principles for the optimisation of thermodynamic processes in the linear-response regime. These include constant speed of control variation according to the thermodynamic metric, absence of quantum coherence, and optimality of small cycles around the point of maximal ratio between heat capacity and relaxation time for Carnot engines.
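For context, the classical bound that this quantum result generalises can be written in the standard Salamon-Berry form (the notation below is chosen here for illustration and is not taken from the paper):

\[
\mathcal{L} \;=\; \int_0^{\tau} \sqrt{\dot{\lambda}^{\mathsf{T}}\, g(\lambda)\, \dot{\lambda}}\;\mathrm{d}t,
\qquad
\Sigma \;\geq\; \frac{\mathcal{L}^{2}}{\tau},
\]

where \(\lambda(t)\) is the control protocol of duration \(\tau\), \(g\) is the thermodynamic metric, \(\mathcal{L}\) is the thermodynamic length of the protocol, and \(\Sigma\) is the entropy production. Equality holds when the protocol traverses a geodesic of \(g\) at constant metric speed, which is the origin of the "constant speed of control variation" principle mentioned above.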
Predicting complex nonlinear turbulent dynamical systems is an important and practical topic. However, due to the lack of a complete understanding of nature, the ubiquitous model error may greatly affect the prediction performance. Machine learning algorithms can overcome model error, but they are often impeded by inadequate and partial observations when predicting nature. In this article, an efficient and dynamically consistent conditional sampling algorithm is developed, which incorporates the conditional path-wise temporal dependence into a two-step forward-backward data assimilation procedure to sample multiple distinct nonlinear time series conditioned on short and partial observations using an imperfect model. The resulting sampled trajectories succeed in reducing the model error and greatly enrich the training data set for machine learning forecasts. For a rich class of nonlinear and non-Gaussian systems, the conditional sampling is carried out by solving a simple stochastic differential equation, which is computationally efficient and accurate. The sampling algorithm is applied to create massive training data of multiscale compressible shallow water flows from highly nonlinear and indirect observations. The resulting machine learning prediction significantly outperforms the imperfect model forecast. The sampling algorithm also facilitates the machine learning forecast of a highly non-Gaussian climate phenomenon using extremely short observations.

The surface nano-crystallization of a Ni2FeCoMo0.5V0.2 medium-entropy alloy was realized by rotationally accelerated shot peening (RASP). The average grain size at the surface layer is ~37 nm, and the nano-grained layer is as thin as ~20 μm. Transmission electron microscopy analysis revealed that deformation twinning and dislocation activities are responsible for the effective grain refinement of the medium-entropy alloy. In order to reveal the effectiveness of surface nano-crystallization on the Ni2FeCoMo0.5V0.2 medium-entropy alloy, a common model material, Ni, is used as a reference. Under the same shot peening conditions, the surface layer of Ni could only be refined to an average grain size of ~234 nm. An ultrafine-grained surface layer is less effective in absorbing strain energy than a nano-grained layer; thus, grain refinement could be realized at depths of up to 70 μm in the Ni sample.

We analyze an agent-based model to estimate how the costs and benefits of users in an online social network (OSN) impact the robustness of the OSN. Benefits are measured in terms of the relative reputation that users receive from their followers. They can be increased by direct and indirect reciprocity in following each other, which leads to a core-periphery structure of the OSN. Costs relate to the effort to log in, maintain the profile, etc., and are assumed to be constant for all users. The robustness of the OSN depends on the entry and exit of users over time. Intuitively, one would expect that higher costs lead to more users leaving and hence to a less robust OSN. We demonstrate that an optimal cost level exists, which maximizes both the performance of the OSN, measured by means of the long-term average benefit of its users, and the robustness of the OSN, measured by means of the lifetime of the core of the OSN. Our mathematical and computational analyses reveal how changes in the cost level impact reciprocity and, subsequently, the core-periphery structure of the OSN, explaining the optimal cost level.
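To make the cost-benefit mechanism concrete, the following is a minimal, hypothetical Python sketch in the spirit of the model described above; it is not the authors' code, and every parameter and the benefit rule (followers scaled by a fixed per-link value as a crude stand-in for relative reputation) are illustrative assumptions. Users whose benefit falls below the constant cost exit and are replaced by newcomers, and the size of the surviving "core" is tracked while the cost level is swept.

import random


def simulate_osn(n_users=200, cost=0.6, p_follow=0.05, link_value=0.1,
                 steps=300, seed=0):
    """Toy agent-based sketch loosely inspired by the OSN model (illustrative only).

    Users follow a random set of others. A user's benefit is proportional to
    the number of followers (a crude proxy for relative reputation); the cost
    of staying is constant. Users whose benefit falls below the cost leave and
    are replaced by newcomers with fresh, sparse follow lists. Returns the
    long-term average benefit and the average size of the 'core' (users whose
    benefit stays at or above the cost).
    """
    rng = random.Random(seed)
    follows = [{j for j in range(n_users) if j != i and rng.random() < p_follow}
               for i in range(n_users)]
    avg_benefits, core_sizes = [], []
    for _ in range(steps):
        followers = [0] * n_users
        for i in range(n_users):
            for j in follows[i]:
                followers[j] += 1
        benefit = [link_value * f for f in followers]
        # Users with benefit below the cost exit; each is replaced by a
        # newcomer: incoming links to the old account are dropped and the
        # newcomer draws a fresh, sparse set of outgoing links.
        leavers = [i for i in range(n_users) if benefit[i] < cost]
        for i in leavers:
            for s in follows:
                s.discard(i)
            follows[i] = {j for j in range(n_users)
                          if j != i and rng.random() < p_follow}
        avg_benefits.append(sum(benefit) / n_users)
        core_sizes.append(n_users - len(leavers))
    return sum(avg_benefits) / steps, sum(core_sizes) / steps


if __name__ == "__main__":
    # Sweep the (hypothetical) cost level to see how average benefit and
    # core size respond; the values are placeholders, not the paper's.
    for c in (0.2, 0.6, 1.0, 1.4):
        perf, core = simulate_osn(cost=c)
        print(f"cost={c:.1f}  mean benefit={perf:.2f}  mean core size={core:.1f}")

This toy omits reputation propagation, heterogeneous users, and explicit core-periphery detection, so it should be read only as a scaffold for experimenting with the cost sweep, not as a reproduction of the reported optimal-cost result.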