Interestingly, we show that although measurement points with zero counts are not statistically significant, they provide information about the particle distribution function that becomes important when the particle flux is low (a toy likelihood sketch of this point appears after the abstracts below). We also examine the convergence of the fitting algorithm for expected plasma conditions and discuss the sources of statistical and systematic uncertainty.

Prediction of labor is of extreme importance in obstetric care, allowing for preventive measures and assuring that both baby and mother receive the best possible care. In this work, the authors studied how useful nonlinear parameters (entropy and compression) can be as labor predictors. Linear features retrieved from the SisPorto system for cardiotocogram analysis, together with nonlinear measures, were used to predict labor in a dataset of 1072 antepartum tracings recorded between 30 and 35 weeks of gestation. Two groups were defined: Group A, fetuses whose tracings were acquired less than one or two weeks before labor, and Group B, fetuses whose tracings were acquired at least one or two weeks before labor. The results suggest that, compared with linear features such as decelerations and variability indices, compression improves labor prediction both within one week (C-statistic of 0.728) and within two weeks (C-statistic of 0.704). Moreover, the correlation between compression and long-term variability differed significantly between Groups A and B, indicating that compression and heart rate variability capture different information about whether the fetus is closer to or further from labor onset. Nonlinear measures, compression in particular, may be useful in improving labor prediction as a complement to other fetal heart rate features (a sketch of compression as a complexity score appears after the abstracts below).

Current neural network architectures are often hard to train because of the increasing size and complexity of the datasets used. Our objective is to design more efficient training algorithms utilizing causal relationships inferred from neural networks. Transfer entropy (TE) was initially introduced as an information-transfer measure used to quantify the statistical coherence between events (time series). It was later related to causality, even though the two are not the same. Only a few papers report applications of causality or TE in neural networks. Our contribution is an information-theoretical method for analyzing information transfer between the nodes of feedforward neural networks. The information transfer is measured by the TE of feedback neural connections. Intuitively, TE measures the relevance of a connection in the network, and the feedback amplifies this connection. We introduce a backpropagation-type training algorithm that uses TE feedback connections to improve its performance.
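As a hedged, self-contained illustration of the measure named in the preceding abstract (not that paper's TE-feedback training algorithm), the sketch below computes transfer entropy between two discretized time series with a history length of one; the coupled binary series x and y are synthetic.

```python
import numpy as np
from collections import Counter

rng = np.random.default_rng(2)
n = 20000
x = rng.integers(0, 2, size=n)                  # driver series
y = np.zeros(n, dtype=int)
for t in range(n - 1):
    # y copies x with probability 0.9, so information flows from x to y
    y[t + 1] = x[t] if rng.random() < 0.9 else rng.integers(0, 2)

def transfer_entropy(src, dst):
    """TE_{src->dst} with history length 1, in bits:
    sum over (d+, d, s) of p(d+, d, s) * log2[ p(d+|d, s) / p(d+|d) ]."""
    triples = Counter(zip(dst[1:], dst[:-1], src[:-1]))
    pairs_ds = Counter(zip(dst[:-1], src[:-1]))
    pairs_dd = Counter(zip(dst[1:], dst[:-1]))
    single_d = Counter(dst[:-1])
    m = len(dst) - 1
    te = 0.0
    for (dn, dp, sp), c in triples.items():
        p_full = c / pairs_ds[(dp, sp)]             # p(d+ | d, s)
        p_self = pairs_dd[(dn, dp)] / single_d[dp]  # p(d+ | d)
        te += (c / m) * np.log2(p_full / p_self)
    return te

print(f"TE(X -> Y) = {transfer_entropy(x, y):.3f} bits")   # clearly positive
print(f"TE(Y -> X) = {transfer_entropy(y, x):.3f} bits")   # near zero
```

On this toy pair, TE(X → Y) comes out clearly positive while TE(Y → X) stays near zero, matching the direction in which the series were coupled.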
This paper is a step towards developing a geometric understanding of a popular algorithm for training deep neural networks, stochastic gradient descent (SGD). We build upon a recent result which observed that the noise in SGD while training typical networks is highly non-isotropic. That motivated a deterministic model in which the trajectories of our dynamical systems are described via geodesics of a family of metrics arising from a certain diffusion matrix, namely, the covariance of the stochastic gradients in SGD (a numerical sketch of this covariance appears after the abstracts below). Our model is analogous to models in general relativity: the role of the electromagnetic field in the latter is played by the gradient of the loss function of a deep network in the former.

This paper studies the fluid flow and heat transfer characteristics of nanofluids as advanced coolants for the cooling system of electric motors. Investigations are carried out using numerical analysis for a cooling system with spiral channels. Computational fluid dynamics and 3D fluid motion analysis are used to solve the governing equations. The base fluid is water with laminar flow. The fluid Reynolds number and the turn number of the spiral channels are the evaluation parameters. The effect of the nanoparticle volume fraction in the base fluid on the heat transfer performance of the cooling system is studied. Increasing the volume fraction of nanoparticles improves the heat transfer performance of the cooling system (an illustrative effective-conductivity sketch appears after the abstracts below); on the other hand, a high volume fraction of nanoparticles increases the pressure drop of the coolant and the required pumping power. This paper aims at finding a trade-off between the effective parameters by studying both the fluid flow and the heat transfer characteristics of the nanofluid.

We describe a classifier made of an ensemble of decision trees, designed using information theory concepts. In contrast to algorithms such as C4.5 or ID3, each tree is built from the leaves instead of the root. Each tree is made of nodes trained independently of the others, so as to minimize a local cost function (information bottleneck). The trained tree outputs the estimated probabilities of the classes given the input datum, and the outputs of many trees are combined to decide the class. We show that the system provides results comparable to those of a conventional tree classifier in terms of accuracy, while offering advantages in terms of modularity, reduced complexity, and memory requirements.

The information bottleneck (IB) problem tackles the issue of obtaining relevant compressed representations T of some random variable X for the task of predicting Y. It is defined as a constrained optimization problem that maximizes the information the representation has about the task, I(T;Y), while ensuring that a certain level of compression r is achieved (i.e., I(X;T) ≤ r). For practical reasons, the problem is usually solved by maximizing the IB Lagrangian L_IB(T;β) = I(T;Y) − β·I(X;T) for many values of β ∈ [0,1]. The curve of maximal I(T;Y) for a given I(X;T) is then drawn, and a representation with the desired predictability and compression is selected. It is known that when Y is a deterministic function of X, the IB curve cannot be explored this way, and another Lagrangian has been proposed to tackle this problem: the squared IB Lagrangian L_sq-IB(T;β_sq) = I(T;Y) − β_sq·I(X;T)². In this paper, we (i) present a general family of Lagrangians which allow for the exploration of the IB curve in all scenarios; (ii) provide the exact one-to-one mapping between the Lagrange multiplier and the desired compression rate r for known IB curve shapes; and (iii) show that we can approximately obtain a specific compression level with the convex IB Lagrangian for both known and unknown IB curve shapes.
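To make the Lagrangians in the last abstract concrete, the following sketch (an illustration on an invented toy joint distribution, not the authors' code) runs classical self-consistent IB iterations and reports I(X;T), I(T;Y), and the values of both the linear and the squared IB Lagrangian across several β.

```python
import numpy as np

rng = np.random.default_rng(1)

def mutual_info(pab):
    """Mutual information (in nats) of a joint distribution given as a matrix."""
    pa = pab.sum(axis=1, keepdims=True)
    pb = pab.sum(axis=0, keepdims=True)
    mask = pab > 1e-15
    return float((pab[mask] * np.log(pab[mask] / (pa * pb)[mask])).sum())

# Toy joint distribution p(x, y); the numbers are arbitrary.
pxy = rng.random((4, 3))
pxy /= pxy.sum()
px = pxy.sum(axis=1)
py_x = pxy / px[:, None]                                 # p(y|x)

def ib_curve_point(beta, n_t=4, iters=300):
    """Self-consistent IB iterations maximizing I(T;Y) - beta * I(X;T)."""
    pt_x = rng.random((len(px), n_t))
    pt_x /= pt_x.sum(axis=1, keepdims=True)
    for _ in range(iters):
        pt = px @ pt_x + 1e-15                           # p(t)
        py_t = (pt_x * px[:, None]).T @ py_x / pt[:, None]   # p(y|t)
        # KL(p(y|x) || p(y|t)) for every (x, t) pair
        kl = (py_x[:, None, :] *
              np.log(py_x[:, None, :] / (py_t[None, :, :] + 1e-15))).sum(-1)
        logits = np.log(pt)[None, :] - kl / beta         # Tishby multiplier = 1/beta
        pt_x = np.exp(logits - logits.max(axis=1, keepdims=True))
        pt_x /= pt_x.sum(axis=1, keepdims=True)
    i_xt = mutual_info(px[:, None] * pt_x)               # I(X;T)
    i_ty = mutual_info((pt_x * px[:, None]).T @ py_x)    # I(T;Y)
    return i_xt, i_ty

for beta in (0.1, 0.3, 0.6, 0.9):
    i_xt, i_ty = ib_curve_point(beta)
    print(f"beta={beta:.1f}  I(X;T)={i_xt:.3f}  I(T;Y)={i_ty:.3f}  "
          f"L_IB={i_ty - beta * i_xt:+.3f}  L_sq-IB={i_ty - beta * i_xt**2:+.3f}")
```

Note the multiplier convention: maximizing I(T;Y) − β·I(X;T) with β ∈ [0,1] corresponds to the classical Tishby iteration with multiplier 1/β.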
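Returning to the first abstract: a minimal sketch of why zero-count bins still inform a maximum-likelihood fit. The exponential-spectrum model, bin layout, and numbers below are invented for illustration; the key fact is that for Poisson counts N_i with model expectation λ_i(θ), the log-likelihood is Σ_i (N_i ln λ_i − λ_i) up to a constant, so a zero-count bin still contributes −λ_i and penalizes parameters that predict flux where none was observed.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(3)
energy = np.linspace(0.5, 5.0, 20)                  # measurement points (a.u.)
lam_true = 50.0 * np.exp(-energy / 1.0)             # true expected counts per bin
counts = rng.poisson(lam_true)                      # high-energy bins are often 0

def neg_log_like(theta, keep_zeros=True):
    amp, scale = np.exp(theta)                      # parametrize to stay positive
    lam = amp * np.exp(-energy / scale)
    mask = (counts > 0) | keep_zeros                # optionally drop zero bins
    # Poisson log-likelihood up to a theta-independent constant
    return -(counts[mask] * np.log(lam[mask]) - lam[mask]).sum()

for keep in (True, False):
    fit = minimize(neg_log_like, x0=np.log([10.0, 2.0]), args=(keep,))
    amp, scale = np.exp(fit.x)
    print(f"zero bins {'kept:   ' if keep else 'dropped:'} "
          f"amp = {amp:5.1f}, scale = {scale:5.2f}  (truth: 50.0, 1.00)")
```

Dropping the zero-count bins removes that penalty, which typically biases the fitted spectrum toward overpredicting the sparse tail.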
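For the labor-prediction abstract, here is a sketch of compression as a complexity score and of the C-statistic used to evaluate it. The fetal-heart-rate traces below are synthetic stand-ins (so the separation is exaggerated), and zlib is just one convenient compressor; the study's actual features and data differ.

```python
import zlib
import numpy as np

rng = np.random.default_rng(4)

def compression_score(fhr):
    """Compressed size / raw size of the trace quantized to 1 bpm resolution."""
    q = np.round(fhr - 100).astype(np.uint8)   # FHR ~110-180 bpm fits in a byte
    raw = q.tobytes()
    return len(zlib.compress(raw, 9)) / len(raw)

def c_statistic(scores, labels):
    """C-statistic: probability a positive outranks a negative."""
    pos, neg = scores[labels == 1], scores[labels == 0]
    return float((pos[:, None] > neg[None, :]).mean())

# Synthetic stand-in traces, 40 min at 1 Hz: label 1 = "close to labor",
# simulated here with lower short-term variability than label 0.
n_traces, n_samples = 200, 2400
t = np.arange(n_samples)
labels = rng.integers(0, 2, size=n_traces)
scores = np.empty(n_traces)
for i in range(n_traces):
    noise_sd = 0.5 if labels[i] == 1 else 3.0
    fhr = 140 + 5 * np.sin(2 * np.pi * t / 300) + rng.normal(0, noise_sd, n_samples)
    scores[i] = compression_score(fhr)

# Lower score = more compressible = less complex, so negate for ranking
print(f"C-statistic of the compression score: {c_statistic(-scores, labels):.3f}")
```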
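For the SGD abstract, the sketch below estimates the diffusion matrix it refers to, the covariance of per-sample gradients at the current iterate, on a toy least-squares problem with deliberately anisotropic features, and checks the non-isotropy via the eigenvalue spread. This is a didactic stand-in, not the paper's construction.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 500, 5
scales = np.array([5.0, 2.0, 1.0, 0.5, 0.1])
X = rng.normal(size=(n, d)) * scales          # deliberately anisotropic features
y = X @ rng.normal(size=d) + 0.1 * rng.normal(size=n)
w = rng.normal(size=d)                        # current iterate, away from optimum

# Per-sample gradients of the loss 0.5 * (x_i @ w - y_i)^2 w.r.t. w
residuals = X @ w - y                         # shape (n,)
G = residuals[:, None] * X                    # row i is the gradient for sample i

g_bar = G.mean(axis=0)                        # full-batch gradient
C = (G - g_bar).T @ (G - g_bar) / n           # diffusion matrix: per-sample
                                              # gradient covariance; the noise
                                              # covariance of a size-b minibatch
                                              # is approximately C / b

eigvals = np.linalg.eigvalsh(C)               # ascending order
print("largest / smallest eigenvalue:", eigvals[-1] / max(eigvals[0], 1e-12))
```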
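Finally, for the nanofluid abstract: the classical Maxwell model below illustrates why a higher nanoparticle volume fraction raises the effective thermal conductivity of the coolant. This textbook closure is not the CFD model used in that paper, and the property values are nominal (water with alumina nanoparticles at room temperature).

```python
k_f = 0.613    # W/(m K), water at ~25 C
k_p = 40.0     # W/(m K), alumina nanoparticles (nominal)

def maxwell_k_eff(phi, k_f=k_f, k_p=k_p):
    """Maxwell effective conductivity for particle volume fraction phi."""
    return k_f * (k_p + 2 * k_f + 2 * phi * (k_p - k_f)) / (
                  k_p + 2 * k_f - phi * (k_p - k_f))

for phi in (0.00, 0.01, 0.02, 0.04):
    print(f"phi = {phi:.2f}  ->  k_eff = {maxwell_k_eff(phi):.3f} W/(m K)")
```

The monotone rise in k_eff with phi mirrors the heat transfer improvement reported in the abstract, while the accompanying viscosity increase (not modeled here) drives the pressure-drop penalty it also notes.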