Recent advances in deep convolutional neural networks (CNNs) have boosted the development of video salient object detection (SOD), and many remarkable deep-CNN video SOD models have been proposed. However, many of these models still suffer from coarse boundaries of the salient object, which may be attributed to the loss of high-frequency information. Traditional graph-based video SOD models preserve object boundaries well by performing superpixel/supervoxel segmentation in advance, but they are weaker at highlighting the whole object than the latest deep-CNN models, being limited by heuristic graph clustering algorithms. To tackle this problem, we find a new way forward under the framework of graph convolutional networks (GCNs), taking advantage of both graph models and deep neural networks. Specifically, a superpixel-level spatiotemporal graph is first constructed over multiple frame pairs by exploiting the motion cues implied in those pairs. The graph data are then fed into the devised multi-stream attention-aware GCN, where a novel Edge-Gated graph convolution (GC) operation is proposed to boost the aggregation of saliency information on the graph. A novel attention module is designed to encode spatiotemporal semantic information via adaptive selection of graph nodes and fusion of the static-specific and motion-specific graph embeddings. Finally, a smoothness-aware regularization term is proposed to enhance the uniformity of the salient object, so that graph nodes (superpixels) inherently belonging to the same class are ideally clustered together in the learned embedding space. Extensive experiments have been conducted on three widely used datasets. Compared with fourteen state-of-the-art video SOD models, our proposed method retains salient object boundaries well and possesses strong learning ability, which shows that this work is a good practice for designing GCNs for video SOD.
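To make the first step concrete, the sketch below (NumPy only) builds a superpixel-level spatiotemporal graph for one frame pair: spatial edges connect border-sharing superpixels within a frame, and temporal edges follow a simple motion cue by warping each superpixel centroid with optical flow. The helper names and the centroid-warping heuristic are illustrative assumptions, not the authors' exact construction.

```python
import numpy as np

def spatial_edges(labels):
    """Connect superpixels that share a pixel border within one frame."""
    pairs = np.concatenate([
        np.stack([labels[:, :-1].ravel(), labels[:, 1:].ravel()], axis=1),
        np.stack([labels[:-1, :].ravel(), labels[1:, :].ravel()], axis=1),
    ])
    pairs = np.sort(pairs, axis=1)                          # undirected: order endpoints
    return np.unique(pairs[pairs[:, 0] != pairs[:, 1]], axis=0)

def temporal_edges(labels_t, labels_t1, flow, offset):
    """Link each superpixel in frame t to the superpixel in frame t+1 that
    its flow-warped centroid lands in (a simple motion cue)."""
    h, w = labels_t.shape
    edges = []
    for sp in np.unique(labels_t):
        ys, xs = np.nonzero(labels_t == sp)
        # Warp the superpixel centroid by its mean optical flow (dx, dy).
        ty = int(np.clip(ys.mean() + flow[ys, xs, 1].mean(), 0, h - 1))
        tx = int(np.clip(xs.mean() + flow[ys, xs, 0].mean(), 0, w - 1))
        edges.append((sp, labels_t1[ty, tx] + offset))      # offset shifts frame t+1 ids
    return np.array(edges)
```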
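The paper's exact Edge-Gated GC operation is not reproduced here; the following PyTorch sketch shows the common gated-aggregation pattern such a layer could follow, where each edge (i, j) receives a learned gate computed from its endpoint features before messages are summed.

```python
import torch
import torch.nn as nn

class EdgeGatedGC(nn.Module):
    """A hedged sketch of an edge-gated graph convolution layer."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.lin_self = nn.Linear(in_dim, out_dim)   # transform of the node itself
        self.lin_nbr = nn.Linear(in_dim, out_dim)    # transform of neighbor messages
        self.gate = nn.Linear(2 * in_dim, out_dim)   # edge gate from the endpoint pair

    def forward(self, x, edge_index):
        # x: (N, in_dim) node features; edge_index: (2, E) source/target indices.
        src, dst = edge_index
        # Gate in (0, 1) computed from the concatenated endpoint features.
        g = torch.sigmoid(self.gate(torch.cat([x[dst], x[src]], dim=-1)))
        msg = g * self.lin_nbr(x[src])               # gated messages along edges
        agg = torch.zeros(x.size(0), msg.size(-1), device=x.device)
        agg.index_add_(0, dst, msg)                  # sum messages at each target node
        deg = torch.zeros(x.size(0), 1, device=x.device)
        deg.index_add_(0, dst, torch.ones(dst.size(0), 1, device=x.device))
        return torch.relu(self.lin_self(x) + agg / deg.clamp(min=1))
```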
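For the attention module, one plausible reading of "adaptive selection of graph nodes and fusion of the two stream embeddings" is a per-node softmax over the static and motion streams followed by a node-selection gate. The module below is a minimal sketch under that assumption, not the authors' design.

```python
import torch
import torch.nn as nn

class StreamAttentionFusion(nn.Module):
    """Hedged sketch: per-node attention over two GCN streams plus a node gate."""
    def __init__(self, dim):
        super().__init__()
        self.score = nn.Linear(dim, 1)      # per-node, per-stream relevance score
        self.select = nn.Linear(dim, 1)     # per-node gate (adaptive node selection)

    def forward(self, h_static, h_motion):
        # h_static, h_motion: (N, dim) embeddings from the two GCN streams.
        scores = torch.cat([self.score(h_static), self.score(h_motion)], dim=-1)
        w = torch.softmax(scores, dim=-1)            # (N, 2) stream weights per node
        fused = w[:, :1] * h_static + w[:, 1:] * h_motion
        gate = torch.sigmoid(self.select(fused))     # suppress irrelevant nodes
        return gate * fused
```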
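Finally, a smoothness-aware regularization of the kind described typically penalizes saliency differences between adjacent graph nodes, weighted by feature affinity, so that same-class superpixels are pulled together in the embedding space. The Gaussian affinity weighting below is an assumption about the exact formulation.

```python
import torch

def smoothness_loss(saliency, x, edge_index, sigma=1.0):
    """Hedged sketch: adjacent superpixels with similar features are
    encouraged to receive similar saliency scores."""
    # saliency: (N, 1) predicted node saliency; x: (N, d) node features.
    src, dst = edge_index
    affinity = torch.exp(-((x[src] - x[dst]) ** 2).sum(-1) / (2 * sigma ** 2))
    return (affinity * (saliency[src] - saliency[dst]).squeeze(-1) ** 2).mean()
```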