The strain-generated potential (SGP) is a well-established mechanism in cartilaginous tissues whereby mechanical forces generate electrical potentials. In articular cartilage (AC) and the intervertebral disc (IVD), studies on the SGP have focused on fluid- and ion-driven effects, namely Donnan, diffusion, and streaming potentials. However, recent evidence has indicated a direct coupling between strain and electrical potential. Piezoelectricity is one such mechanism, whereby deformation of most biological structures, such as collagen, can directly generate an electrical potential. In this review, the SGP in AC and the IVD will be revisited in light of piezoelectricity and mechanotransduction. While the evidence base for physiologically significant piezoelectric responses in tissue is lacking, difficulties in quantifying the physiological response and imperfect measurement techniques may have led to the property being underestimated. Further hindering our understanding of the SGP, numerical models to date have neglected ferroelectric effects and have relied on classic Donnan theory that, as evidence suggests, may be oversimplified. Moreover, changes in the SGP with degeneration due to an altered extracellular matrix (ECM) indicate that the significance of ion-driven mechanisms may diminish relative to the piezoelectric response. Finally, the SGP, and the mechanisms behind it, are discussed in relation to the cell response.

Protein aggregation is a topic of immense interest to the scientific community because of its role in several neurodegenerative diseases and disorders, as well as its industrial importance. Several in silico techniques, tools, and algorithms have been developed to predict aggregation in proteins and to understand the aggregation mechanisms. This review attempts to capture the essence of the vast developments in in silico approaches, the resources available, and future perspectives.
It reviews aggregation-related databases, mechanistic models (aggregation-prone region and aggregation-propensity prediction), kinetic models (aggregation-rate prediction), and molecular dynamics studies related to aggregation. With a multitude of aggregation-related prediction models already available to the scientific community, the field of protein aggregation is rapidly maturing to tackle new applications.

The challenge of understanding the complex neuronal circuit functions of the mammalian brain has brought about a revolution in light-based neurotechnologies and optogenetic tools. However, while recent seminal works have offered excellent insights into the processing of basic functions such as sensory perception, memory, and navigation, understanding more complex brain functions remains unattainable with current technologies. We are just scratching the surface, both literally and figuratively. Yet the path towards fully understanding the brain is not entirely uncertain. Recent rapid technological advances have allowed us to analyze the processing of signals within the dendritic arborizations of single neurons and within neuronal circuits. Understanding circuit dynamics in the brain requires a good appreciation of the spatial and temporal properties of neuronal activity. Here, we assess the spatio-temporal parameters of neuronal responses and match them with suitable light-based neurotechnologies as well as photochemical and optogenetic tools. We focus on the spatial range that encompasses dendrites and the brain regions (e.g., cortex and hippocampus) that constitute neuronal circuits. We also review the temporal characteristics of proteins and ion channels responsible for particular neuronal functions. With the aid of photochemical and optogenetic markers, we can use light to visualize the circuit dynamics of a functioning brain.
The challenge of understanding how the brain works continues to excite scientists as research questions begin to link the macroscopic and microscopic units of brain circuits.

Both panel-count data and panel-binary data are common data types in recurrent-event studies. Because of inconsistent questionnaires or missing data during follow-up, mixed data types frequently need to be addressed. A recently proposed semiparametric approach uses a proportional means model to facilitate regression analysis of mixed panel-count and panel-binary data. This method can use all available information regardless of record type and provides unbiased estimates. However, the large number of nuisance parameters in the nonparametric baseline hazard function makes the estimating procedure very complicated and time-consuming. We approximated the baseline hazard function to simplify the estimating procedure. Simulation studies showed that our method performed similarly to the previous semiparametric likelihood-based method, but much faster. Approximating the baseline hazard not only reduced the computational burden but also made it possible to implement the estimating procedure in standard software such as SAS.

This paper studies model-based and design-based approaches for the analysis of data arising from a stepped wedge randomized design. Specifically, for different scenarios we compare the robustness, efficiency, Type I error rate under the null hypothesis, and power under the alternative hypothesis of the leading analytical options, including generalized estimating equations (GEE) and linear mixed model (LMM) based approaches. We find that GEE models with exchangeable correlation structures are more efficient than GEE models with independent correlation structures under all scenarios considered.
The model-based GEE Type I error rate can be inflated when the number of clusters is small, but this problem can be solved using a design-based approach. As expected, correct model specification matters more for LMM than for GEE, since the model is assumed correct when standard errors are calculated. However, in contrast to the model-based results, the design-based Type I error rates for LMM models under scenarios with a random treatment effect show inflation even when the fitted models perfectly match the corresponding data-generating scenarios. Therefore, greater robustness can be realized by combining GEE estimation with permutation-testing strategies.
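The design-based (permutation) strategy can be illustrated with a minimal sketch: simulate a stepped wedge trial in which clusters cross over to treatment in staggered waves, then build a null distribution by re-randomizing which cluster receives which crossover wave while keeping the wedge structure fixed. The simulation parameters, the crude mean-difference statistic, and all names below are illustrative assumptions, not the paper's actual models or analysis.

```python
import numpy as np

rng = np.random.default_rng(0)

n_clusters, n_periods = 12, 5
# Staggered crossover times: clusters switch to treatment in waves of 3,
# one wave per period after the first (the classic stepped wedge layout).
switch = np.repeat(np.arange(1, n_periods), n_clusters // (n_periods - 1))

def simulate(effect=0.0):
    """Simulate cluster-period means with random cluster intercepts
    and a secular time trend (assumed values, for illustration only)."""
    cluster_re = rng.normal(0.0, 0.5, n_clusters)   # random cluster effects
    period_fe = np.linspace(0.0, 0.3, n_periods)    # common time trend
    y = (cluster_re[:, None] + period_fe[None, :]
         + rng.normal(0.0, 1.0, (n_clusters, n_periods)))
    treated = np.arange(n_periods)[None, :] >= switch[:, None]
    return y + effect * treated, treated

def effect_estimate(y, treated):
    # Deliberately crude statistic: treated minus untreated cell means.
    # A real analysis would adjust for period effects; exchangeability of
    # the wave assignment still makes the permutation test valid under
    # the null, because every permuted assignment shares the same bias.
    return y[treated].mean() - y[~treated].mean()

y, treated = simulate(effect=0.4)
obs = effect_estimate(y, treated)

# Design-based test: permute which cluster gets which crossover wave,
# leaving the outcomes and the wedge structure untouched.
null = []
for _ in range(2000):
    perm = rng.permutation(n_clusters)
    treated_p = np.arange(n_periods)[None, :] >= switch[perm][:, None]
    null.append(effect_estimate(y, treated_p))
p_value = np.mean(np.abs(null) >= abs(obs))
```

Because only the assignment of clusters to waves is re-randomized, the reference distribution respects the design exactly, which is the source of the robustness noted above: no working correlation structure or random-effects specification needs to be correct for the test to hold its level under the null.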