This paper provides an in-depth study and analysis of human artistic poses through intelligently enhanced multimodal artistic pose recognition. A complementary network architecture for multimodal information based on motion energy is proposed. The network exploits both the rich appearance features provided by RGB data and the depth information provided by depth data, together with the latter's robustness to changes in illumination and viewing angle; multimodal fusion is accomplished by drawing on the complementary characteristics of the two modalities. Moreover, to better model long-range temporal structure while accounting for action classes that share sub-actions, an energy-guided video segmentation method is employed. In the feature fusion stage, a cross-modal cross-fusion approach is proposed, which enables the convolutional network not only to share local features of the two modalities in the shallow layers but also to fuse their global features in the deep layers. The resulting model achieves an average hand gesture recognition accuracy of 99.04% and improves the robustness and real-time interactivity of hand gesture recognition.

In this study, we jointly examined, in an empirical and a theoretical way, for the first time, two main theories: Lavie's perceptual load theory and Gaspelin et al.'s attentional dwelling hypothesis. These theories explain in different ways how perceptual load/task difficulty modulates attentional capture by irrelevant distractors, and they lead to opposite results under similar manipulations. We hypothesized that these opposite results may critically depend on the distractor type used in the two experimental procedures (i.e., distractors inside vs. outside the attentional focus, which could be considered, respectively, as potentially relevant vs. completely irrelevant to the main task). Across a series of experiments,
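The first abstract only names the cross-modal cross-fusion idea (sharing local features of the two modalities in the shallow layers and fusing global features in the deep layers) without implementation detail. Below is a minimal, hypothetical sketch of that idea in PyTorch; the layer widths, the element-wise-sum exchange of shallow features, the concatenation of deep global descriptors, and the name CrossModalFusionNet are all illustrative assumptions, not the authors' architecture.

```python
# Illustrative sketch (not the paper's implementation): a two-stream CNN over
# RGB and depth frames that exchanges local features in the shallow layers and
# fuses global features in the deep layers, as one reading of the described
# cross-modal cross-fusion. Layer sizes, the exchange rule (element-wise sum of
# the other stream's shallow features), and the classifier head are assumptions.
import torch
import torch.nn as nn


def conv_block(in_ch: int, out_ch: int) -> nn.Sequential:
    """3x3 conv -> batch norm -> ReLU -> 2x2 max pool."""
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1),
        nn.BatchNorm2d(out_ch),
        nn.ReLU(inplace=True),
        nn.MaxPool2d(2),
    )


class CrossModalFusionNet(nn.Module):
    def __init__(self, num_classes: int = 10):
        super().__init__()
        # Shallow stages, one per modality (RGB: 3 channels, depth: 1 channel).
        self.rgb_shallow = conv_block(3, 32)
        self.depth_shallow = conv_block(1, 32)
        # Deep stages, still modality-specific but fed with exchanged features.
        self.rgb_deep = nn.Sequential(conv_block(32, 64), conv_block(64, 128))
        self.depth_deep = nn.Sequential(conv_block(32, 64), conv_block(64, 128))
        self.pool = nn.AdaptiveAvgPool2d(1)
        # Global fusion of the two deep descriptors by concatenation.
        self.classifier = nn.Linear(128 * 2, num_classes)

    def forward(self, rgb: torch.Tensor, depth: torch.Tensor) -> torch.Tensor:
        # Shallow, modality-specific local features.
        f_rgb = self.rgb_shallow(rgb)
        f_depth = self.depth_shallow(depth)
        # Cross-fusion of local features: each stream also sees the other's
        # shallow response (element-wise sum keeps the channel count fixed).
        x_rgb = self.rgb_deep(f_rgb + f_depth)
        x_depth = self.depth_deep(f_depth + f_rgb)
        # Global descriptors from the deep layers, fused before classification.
        g_rgb = self.pool(x_rgb).flatten(1)
        g_depth = self.pool(x_depth).flatten(1)
        return self.classifier(torch.cat([g_rgb, g_depth], dim=1))


# Example: a batch of 4 RGB frames and their aligned depth maps at 112x112.
rgb = torch.randn(4, 3, 112, 112)
depth = torch.randn(4, 1, 112, 112)
logits = CrossModalFusionNet(num_classes=10)(rgb, depth)
print(logits.shape)  # torch.Size([4, 10])
```

The sum-then-process exchange is just one simple way to let each stream see the other's shallow response; the abstract does not say how the sharing is implemented, so any concatenation-, attention-, or gating-based variant would be an equally plausible reading.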