The transitional regime of plane channel flow is investigated above the transitional point below which turbulence is not sustained, using direct numerical simulation in large domains. Statistics of laminar-turbulent spatio-temporal intermittency are reported. The geometry of the pattern is first characterized, including statistics for the angles of the laminar-turbulent stripes observed in this regime, with a comparison to experiments. High-order statistics of the local and instantaneous bulk velocity, wall shear stress and turbulent kinetic energy are then provided. The distributions of the first two of these quantities have non-trivial shapes, characterized by a large kurtosis and/or skewness. Interestingly, we observe a strong linear correlation between their kurtosis and their skewness squared, which is usually reported at much higher Reynolds numbers in the fully turbulent regime.

Uncovering dynamic information flow between stock market indices has been the topic of several studies which exploited the notion of transfer entropy or its linear version, Granger causality. The output of the transfer entropy approach is a directed weighted graph measuring the information about the future state of each target provided by the knowledge of the state of each driving stock market index. In order to go beyond this pairwise description of the information flow and look at higher-order informational circuits, here we apply partial information decomposition to triplets consisting of a pair of driving markets (belonging to America or Europe) and a target market in Asia. Our analysis of daily data recorded during the years 2000 to 2019 allows the identification of the synergistic information that a pair of drivers carries about the target. By studying the influence of the closing returns of the drivers on the subsequent overnight changes of the target indices, we find that (i) Korea, Tokyo, Hong Kong, and Singapore are, in that order, the most influenced Asian markets; (ii) the US indices SP500 and Russell are the strongest drivers with respect to bivariate Granger causality; and (iii) concerning higher-order effects, pairs of European and American stock market indices play a major role as the most synergetic three-variable circuits. Our results show that synergy, a proxy of higher-order predictive information flow rooted in information theory, provides details that are complementary to those obtained from bivariate and global Granger causality, and can thus be used to obtain a better characterization of the global financial system.

Much of the field of Machine Learning exhibits a prominent set of failure modes, including vulnerability to adversarial examples, poor out-of-distribution (OoD) detection, miscalibration, and willingness to memorize random labelings of datasets. We characterize these as failures of robust generalization, which extends the traditional measure of generalization as accuracy or related metrics on a held-out set. We hypothesize that these failures to robustly generalize are due to the learning systems retaining too much information about the training data. To test this hypothesis, we propose the Minimum Necessary Information (MNI) criterion for evaluating the quality of a model. In order to train models that perform well with respect to the MNI criterion, we present a new objective function, the Conditional Entropy Bottleneck (CEB), which is closely related to the Information Bottleneck (IB).
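
For illustration, the following is a minimal sketch of how a variational CEB-style objective could be written, assuming a Gaussian forward encoder e(z|x), a Gaussian backward encoder b(z|y), and a classifier c(y|z); the module names (enc, b_enc, clf) and the trade-off weight beta are hypothetical and this is not the authors' implementation.

```python
# Minimal sketch of a variational CEB-style objective (not the authors' code).
# Assumes `enc(x)` and `b_enc(y)` return (mean, log_std) tensors for Gaussian
# distributions over the latent z, and `clf(z)` returns class logits; `y` is a
# tensor of integer class labels. All names and the weight `beta` are illustrative.
import torch
import torch.nn.functional as F
from torch.distributions import Normal, kl_divergence

def ceb_loss(x, y, enc, b_enc, clf, beta=1.0):
    mu_e, log_sig_e = enc(x)        # forward encoder e(z|x)
    mu_b, log_sig_b = b_enc(y)      # backward encoder b(z|y)
    e_dist = Normal(mu_e, log_sig_e.exp())
    b_dist = Normal(mu_b, log_sig_b.exp())
    z = e_dist.rsample()            # reparameterized latent sample
    # KL[e(z|x) || b(z|y)] gives a variational upper bound on the residual
    # information I(X;Z|Y) that the model retains beyond what is needed for Y.
    residual = kl_divergence(e_dist, b_dist).sum(dim=-1)
    # -log c(y|z) gives (up to a constant) a variational lower bound on I(Y;Z).
    class_nll = F.cross_entropy(clf(z), y, reduction="none")
    return (beta * residual + class_nll).mean()
```

The design intent sketched here is simply to penalize information about X that is not also predictive of Y while rewarding predictive information, which is the sense in which such an objective pushes a model toward the MNI point.
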
We experimentally test our hypothesis by comparing the performance of CEB models with deterministic models and Variational Information Bottleneck (VIB) models on a variety of different datasets and robustness challenges. We find strong empirical evidence supporting our hypothesis that MNI models improve on these problems of robust generalization.

The study of cosmic rays remains one of the most challenging research fields in physics. Of the many questions still open in this area, knowledge of the type of primary for each event remains one of the most important. All cosmic-ray observatories have been trying to answer this question for at least six decades, but have not yet succeeded. The main obstacle is the impossibility of directly detecting high-energy primary events, making it necessary to use Monte Carlo models and simulations to characterize the generated particle cascades. This work presents the results obtained using a simulated dataset produced by the Monte Carlo code CORSIKA, which simulates the interactions of high-energy particles with the atmosphere, resulting in a cascade of secondary particles extending over a few kilometers (in diameter) at ground level. Using these simulated data, a set of machine learning classifiers has been designed and trained, and their computational cost and effectiveness compared when classifying the type of primary under ideal measuring conditions. Additionally, a feature selection algorithm has allowed the relevance of the considered features to be identified. The results confirm the importance of separating the electromagnetic and muonic components of the measured signal data for this problem. The results obtained are quite encouraging and open new lines of work for future, more restrictive simulations.

The connection between endoreversible models of Finite-Time Thermodynamics and the corresponding real running irreversible processes is investigated by introducing two concepts which complement each other: Simulation and Reconstruction. In that context, the importance of particular machine diagrams for Simulation and of (reconstruction) parameter diagrams for Reconstruction is emphasized. Additionally, the treatment of internal irreversibilities through the use of contact quantities such as the contact temperature is introduced into the Finite-Time Thermodynamics description of thermal processes.

Recent advances in theoretical and experimental quantum computing raise the problem of verifying the outcome of these quantum computations. Recent verification protocols using blind quantum computing offer a fruitful way of addressing this problem. Unfortunately, all known schemes have relatively high overhead. Here we present a novel construction for the resource state of verifiable blind quantum computation. This approach achieves an improved verifiability of 0.866 in the case of classical output. In addition, the number of required qubits is 2N + 4cN, where N and c are the number of vertices and the maximal degree of the original computation graph, respectively. In other words, our overhead is lower, scaling linearly with the size of the computation. Finally, we utilize the method of repetition and fault-tolerant code to optimise the verifiability.
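
As a simple illustration of the stated qubit count, the following sketch evaluates 2N + 4cN for a hypothetical computation graph; the function name and the example graph parameters are assumptions, not part of the protocol itself.

```python
# Illustrative arithmetic only: evaluates the stated resource count 2N + 4cN,
# where N is the number of vertices and c the maximal degree of the original
# computation graph. Function name and example values are hypothetical.
def required_qubits(num_vertices: int, max_degree: int) -> int:
    return 2 * num_vertices + 4 * max_degree * num_vertices

# e.g. a computation graph with N = 100 vertices and maximal degree c = 3:
print(required_qubits(100, 3))  # 2*100 + 4*3*100 = 1400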