Extensive experimental results show that our approach outperforms existing superpixel segmentation methods in boundary alignment and compactness for generating convex superpixels.

Food recognition has attracted considerable research attention owing to its importance for health-related applications. Existing approaches mostly focus on categorizing food by dish name while ignoring the underlying ingredient composition. In reality, two dishes with the same name do not necessarily share the exact same list of ingredients, so dishes under the same food category are not necessarily equal in nutritional content. Nevertheless, because few datasets with ingredient labels are available, the problem of ingredient recognition is often overlooked. Furthermore, as the number of ingredients is expected to be much smaller than the number of food categories, ingredient recognition is more tractable in real-world scenarios. This paper provides an insightful analysis of three compelling issues in ingredient recognition: recognition at the image level versus the region level, pooling at a single image scale versus multiple scales, and learning in a single-task versus multi-task manner (see the multi-task sketch further below). The analysis is conducted on a large food dataset, Vireo Food-251, contributed by this paper. The dataset comprises 169,673 images covering 251 popular Chinese dishes and 406 ingredients, and it poses sufficient challenges in scale and complexity to reveal the limits of current approaches to ingredient recognition.

Directly benefiting from deep learning methods, object detection has witnessed a great performance boost in recent years. However, drone-view object detection remains challenging for two main reasons: (1) tiny-scale objects with more blur than ground-view objects offer less valuable information for accurate and robust detection; (2) unevenly distributed objects make detection inefficient, especially in regions occupied by crowded objects. Confronting these challenges, we propose an end-to-end global-local self-adaptive network (GLSAN) in this paper. The key components of GLSAN are a global-local detection network (GLDN), a simple yet efficient self-adaptive region selecting algorithm (SARSA), and a local super-resolution network (LSRN). We integrate a global-local fusion strategy into a progressive scale-varying network to perform more precise detection, where the local fine detector adaptively refines the bounding boxes produced by the global coarse detector by cropping the original images for higher-resolution detection. SARSA dynamically crops the crowded regions in the input images; it is unsupervised and can easily be plugged into the network. Additionally, we train the LSRN to enlarge the cropped images, providing more detailed information for finer-scale feature extraction and helping the detector distinguish foreground from background more easily. SARSA and the LSRN also contribute to data augmentation during network training, which makes the detector more robust. Extensive experiments and comprehensive evaluations on the VisDrone2019-DET and UAVDT benchmark datasets demonstrate the effectiveness and adaptivity of our method. Toward industrial application, our network is also applied to a DroneBolts dataset with proven advantages. Our source code is available at https://github.com/dengsutao/glsan.
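The abstract describes SARSA only at a high level, so the following is a minimal sketch of the general idea rather than the authors' algorithm: unsupervised selection of crowded sub-regions by grouping coarse detections, here via k-means over box centers. The function name, the k-means grouping, and the padding parameter are all illustrative assumptions.

```python
import numpy as np
from sklearn.cluster import KMeans

def select_crowded_regions(boxes, img_w, img_h, n_regions=2, pad=32):
    """Hypothetical stand-in for SARSA: group coarse detections by
    k-means over their centers and return padded crops around each
    dense cluster. boxes: (N, 4) array of [x1, y1, x2, y2]."""
    n_regions = min(n_regions, len(boxes))  # k-means needs >= k samples
    centers = np.stack([(boxes[:, 0] + boxes[:, 2]) / 2,
                        (boxes[:, 1] + boxes[:, 3]) / 2], axis=1)
    labels = KMeans(n_clusters=n_regions, n_init=10).fit_predict(centers)
    crops = []
    for k in range(n_regions):
        member = boxes[labels == k]
        if len(member) == 0:
            continue
        # Crop the union of the cluster's boxes, padded and clipped
        # to the image bounds; each crop is re-detected at high res.
        x1 = max(0, int(member[:, 0].min()) - pad)
        y1 = max(0, int(member[:, 1].min()) - pad)
        x2 = min(img_w, int(member[:, 2].max()) + pad)
        y2 = min(img_h, int(member[:, 3].max()) + pad)
        crops.append((x1, y1, x2, y2))
    return crops
```

Each returned crop would then be enlarged (by an LSRN-like module) and passed to the fine detector, whose boxes are mapped back to the original image coordinates.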
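Returning to the ingredient-recognition analysis above, here is a minimal PyTorch sketch of the multi-task setting: a shared backbone feature feeding two heads sized to Vireo Food-251's labels (251 categories, 406 ingredients). The layer shapes, loss weighting, and head design are assumptions for illustration, not the paper's architecture.

```python
import torch
import torch.nn as nn

class MultiTaskHead(nn.Module):
    """Hypothetical multi-task head: one shared feature, two outputs --
    a 251-way food category (single-label) and 406 ingredients
    (multi-label), mirroring Vireo Food-251's label structure."""
    def __init__(self, feat_dim=2048, n_categories=251, n_ingredients=406):
        super().__init__()
        self.category = nn.Linear(feat_dim, n_categories)
        self.ingredients = nn.Linear(feat_dim, n_ingredients)

    def forward(self, feat):
        return self.category(feat), self.ingredients(feat)

# Joint loss: cross-entropy for the single-label category task, and
# binary cross-entropy (one sigmoid per ingredient) for the
# multi-label ingredient task; equal weighting is an assumption.
head = MultiTaskHead()
feat = torch.randn(8, 2048)  # stand-in for backbone features
cat_logits, ing_logits = head(feat)
cat_loss = nn.CrossEntropyLoss()(cat_logits, torch.randint(0, 251, (8,)))
ing_loss = nn.BCEWithLogitsLoss()(ing_logits, torch.rand(8, 406).round())
loss = cat_loss + ing_loss
```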
The rapid growth of data volume brings great challenges to clustering, and the introduction of multi-view data, collected from multiple sources or represented by multiple features, makes these challenges more arduous. How to cluster large-scale data efficiently has become one of the hottest topics in large-scale clustering. Although several accelerated multi-view methods have been proposed to improve the efficiency of clustering large-scale data, their high computational complexity still prevents their use in scenarios that demand high efficiency. To cope with the high computational complexity of existing multi-view methods on large-scale data, this paper proposes a fast multi-view clustering model via nonnegative and orthogonal factorization (FMCNOF). Instead of only constraining the factor matrices to be nonnegative, as in traditional nonnegative and orthogonal factorization (NOF), we constrain one factor matrix of the model to be a cluster indicator matrix, which assigns cluster labels to the data directly, without an extra post-processing step to extract cluster structures from the factor matrix (see the indicator-factorization sketch at the end of this section). Meanwhile, the F-norm, instead of the L2-norm, is utilized in the FMCNOF model, which makes the model very easy to optimize. Furthermore, an efficient optimization algorithm is proposed to solve the FMCNOF model. Unlike the traditional NOF optimization algorithm, which requires dense matrix multiplications, our algorithm divides the optimization problem into three decoupled small-size subproblems that can be solved with far fewer matrix multiplications. Combining the FMCNOF model with the corresponding fast optimization method significantly improves the efficiency of the clustering process, with a computational complexity of nearly O(n). Extensive experiments on various benchmark datasets validate that our approach greatly improves efficiency while achieving acceptable performance.

Light Field (LF) imaging offers unique advantages such as post-capture refocusing and depth estimation, but low-light conditions severely limit these capabilities. To restore low-light LFs, we should harness the geometric cues present in the different LF views, which is not possible with single-frame low-light enhancement techniques. We propose a deep neural network, L3Fnet, for Low-Light Light Field (L3F) restoration, which not only performs visual enhancement of each LF view but also preserves the epipolar geometry across views. We achieve this by adopting a two-stage architecture for L3Fnet: Stage-I looks at all the LF views to encode the LF geometry, and this encoded information is then used in Stage-II to reconstruct each LF view. To facilitate learning-based techniques for low-light LF imaging, we collected a comprehensive LF dataset of various scenes. For each scene, we captured four LFs: one with near-optimal exposure and ISO settings, and the others at different levels of low light, ranging from low to extremely low-light settings.
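A minimal sketch of the two-stage idea described above, assuming a plain convolutional geometry encoder and a per-view restoration stage; the actual L3Fnet layers, view count, and training details are not specified here, so everything below is illustrative.

```python
import torch
import torch.nn as nn

class L3FSketch(nn.Module):
    """Minimal two-stage sketch in the spirit of L3Fnet: Stage-I sees
    all LF views at once to encode geometry; Stage-II restores each
    view conditioned on that encoding. Channel sizes are assumptions."""
    def __init__(self, n_views=49, enc_ch=64):
        super().__init__()
        self.stage1 = nn.Sequential(  # all views -> shared geometry code
            nn.Conv2d(3 * n_views, enc_ch, 3, padding=1), nn.ReLU(),
            nn.Conv2d(enc_ch, enc_ch, 3, padding=1))
        self.stage2 = nn.Sequential(  # one view + code -> restored view
            nn.Conv2d(3 + enc_ch, enc_ch, 3, padding=1), nn.ReLU(),
            nn.Conv2d(enc_ch, 3, 3, padding=1))

    def forward(self, views):  # views: (B, n_views, 3, H, W)
        b, v, c, h, w = views.shape
        code = self.stage1(views.reshape(b, v * c, h, w))
        out = [self.stage2(torch.cat([views[:, i], code], dim=1))
               for i in range(v)]
        return torch.stack(out, dim=1)  # (B, n_views, 3, H, W)
```

Because the geometry code is computed once from all views and shared by every per-view reconstruction, the restored views stay consistent across the epipolar structure rather than being enhanced independently.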
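Returning to FMCNOF: the abstract does not spell out the update rules, so the following sketches only the generic idea behind a hard cluster-indicator factor in a factorization objective, for a single view. With G constrained to be an indicator matrix, minimizing ||X - GF||_F^2 alternates between a centroid-like F-update and a nearest-centroid reassignment; FMCNOF's actual multi-view objective and its three decoupled subproblems differ in detail.

```python
import numpy as np

def indicator_factorization(X, k, n_iter=50, seed=0):
    """Sketch of NOF-style clustering with a hard indicator factor:
    minimize ||X - G F||_F^2 where G is an n-by-k cluster indicator
    matrix (exactly one 1 per row). Single-view only; FMCNOF's
    multi-view objective and updates differ. X: (n, d) data matrix."""
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    labels = rng.integers(0, k, size=n)
    for _ in range(n_iter):
        # F-update: each row of F is the mean of its cluster's samples
        # (empty clusters are re-seeded with a random sample).
        F = np.stack([X[labels == j].mean(axis=0) if np.any(labels == j)
                      else X[rng.integers(n)] for j in range(k)])
        # G-update: reassign each sample to its nearest row of F; this
        # is the indicator matrix minimizing the residual, so cluster
        # labels fall out directly with no post-processing.
        d2 = ((X[:, None, :] - F[None, :, :]) ** 2).sum(axis=2)
        labels = d2.argmin(axis=1)
    G = np.eye(k)[labels]  # explicit indicator matrix
    return G, F, labels
```

The per-iteration cost of these two updates is linear in the number of samples n, which is consistent with the nearly O(n) complexity claimed for FMCNOF, although the paper's own algorithm achieves it through different subproblems.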