Vectorizing vortex-core lines is crucial for high-quality visualization and analysis of turbulence. While several techniques exist in the literature, they apply only to classical fluids. As turbulence in quantum fluids gains attention in physics, extracting and visualizing vortex-core lines for quantum fluids is increasingly desirable. In this paper, we develop an efficient vortex-core line vectorization method for quantum fluids that enables real-time visualization of high-resolution quantum turbulence structure. From a simulated dataset, our technique first identifies vortex nodes based on the circulation field. To vectorize the vortex-core lines that interpolate these vortex nodes, we propose a novel graph-based data structure with iterative graph reduction and density-guided local optimization, which locates sub-grid-scale vortex-core line samples more precisely; these samples are then vectorized as continuous curves. This vortex-core representation naturally captures complex topology, such as branching during reconnection. Our vectorization approach reduces memory consumption by orders of magnitude, enabling real-time visualization performance. Different types of interactive visualizations demonstrate the effectiveness of our technique, which could help further research on quantum turbulence.
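The circulation-based vortex-node identification mentioned above can be illustrated with a minimal sketch: on a single 2-D slice of a simulated wavefunction, a grid plaquette is flagged when the phase winds by roughly ±2π around it. The function names and the threshold below are our assumptions, and the paper's graph-based data structure, sub-grid optimization, and curve fitting are not reproduced here.

```python
import numpy as np

def wrap(d):
    """Wrap phase differences into [-pi, pi)."""
    return (d + np.pi) % (2.0 * np.pi) - np.pi

def vortex_plaquettes(psi_slice):
    """Boolean mask of grid plaquettes threaded by a vortex line.

    psi_slice : 2-D complex array, one slice of the simulated wavefunction.
    A plaquette is flagged when the phase of psi winds by ~+-2*pi around
    its four corners (i.e. the circulation is one quantum).
    """
    phi = np.angle(psi_slice)
    d_right = wrap(phi[:-1, 1:]  - phi[:-1, :-1])   # bottom edge, left -> right
    d_up    = wrap(phi[1:, 1:]   - phi[:-1, 1:])    # right edge, bottom -> top
    d_left  = wrap(phi[1:, :-1]  - phi[1:, 1:])     # top edge, right -> left
    d_down  = wrap(phi[:-1, :-1] - phi[1:, :-1])    # left edge, top -> bottom
    circulation = d_right + d_up + d_left + d_down  # ~ +-2*pi at a vortex core
    return np.abs(circulation) > np.pi              # robust threshold
```

Applying the same test to each face of every 3-D grid cell yields candidate crossing points through which core-line samples can be threaded, which is the usual starting point for this kind of extraction.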
Human-in-the-loop topic modeling allows users to explore and steer the process to produce better-quality topics that align with their needs. When integrated into visual analytic systems, many existing automated topic modeling algorithms are exposed through interactive parameters that let users tune or adjust them. However, this approach has limitations when the algorithms cannot easily adapt to changes, and it is difficult to realize interactivity that is closely supported by the underlying algorithms. Instead, we emphasize the concept of tight integration, which advocates co-developing interactive algorithms and interactive visual analytic systems in parallel to allow flexibility and scalability. In this paper, we describe design goals for efficiently and effectively executing this concept of tight integration among computation, visualization, and interaction for hierarchical topic modeling of text data. We propose computational base operations for interactive tasks to achieve the design goals. To instantiate our concept, we present ArchiText, a prototype system for interactive hierarchical topic modeling, which offers fast, flexible, and algorithmically valid analysis via tight integration. Utilizing interactive hierarchical topic modeling, our technique lets users generate, explore, and flexibly steer hierarchical topics to discover more informed topics and their document memberships.

In this paper, we investigate the importance of phase for texture discrimination and similarity estimation tasks. We first use two psychophysical experiments to investigate the relative importance of the phase and magnitude spectra for human texture discrimination and similarity estimation. The results show that phase is more important to humans for both tasks. We further examine the ability of 51 computational feature sets to perform these two tasks. In contrast with the psychophysical experiments, the magnitude data turn out to be more important to these computational feature sets than the phase data. We hypothesise that this inconsistency is due to the difference between the abilities of humans and the computational feature sets to utilise phase data. This motivates us to investigate applying the 51 feature sets to phase-only images in addition to the original data set (a phase-only reconstruction sketch is given at the end of this section). This investigation is extended to exploit Convolutional Neural Network (CNN) features. The results show that our feature fusion scheme improves the average performance of those feature sets for estimating humans' perceptual texture similarity. The superior performance should be attributed to the importance of phase to texture similarity.

Edge detection is one of the most fundamental operations in image analysis and computer vision, serving as a critical preprocessing step for high-level tasks. It is difficult to choose a single generic threshold that works well on all images, since image contents differ widely. This paper presents an adaptive, robust, and effective edge detector for real-time applications. Based on their two-dimensional entropy, images are classified into three groups, each assigned a reference percentage value derived from edge-proportion statistics. By comparison with neighbouring points along the gradient direction, anchor points with a high probability of being edge pixels are extracted (an illustrative anchor-extraction sketch is given at the end of this section). Taking the segment direction into account, these points are then joined into edge segments, each of which is a clean, contiguous, one-pixel-wide chain of pixels. Experimental results indicate that the proposed edge detector outperforms traditional edge-following methods in detection accuracy. In addition, the detection results can serve as input for post-processing applications in real time.

Obtained by a wideband radar system, the high resolution range profile (HRRP) is the projection of a target's scatterers onto the radar line of sight (LOS). HRRP reconstruction is unavoidable for inverse synthetic aperture radar (ISAR) imaging and is of particular use for target recognition, especially when an ISAR image of the target cannot be obtained. For a high-speed moving target, however, the HRRP is stretched by high-order phase errors. To obtain a well-focused HRRP, the phase error induced by the target's velocity should be compensated using either a measured or an estimated velocity. Since traditional velocity estimation and HRRP reconstruction algorithms become invalid for under-sampled data, a novel HRRP reconstruction method for high-speed targets under under-sampling is proposed. The Laplacian scale mixture (LSM) is used as the sparse prior of the HRRP, and variational Bayesian inference is utilized to derive its posterior, so as to reconstruct the HRRP with high resolution from the under-sampled data.
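As a rough, hypothetical illustration of recovering an HRRP from under-sampled frequency bins, the sketch below uses a partial-Fourier forward model with plain iterative soft thresholding as a simple stand-in for the LSM prior and variational Bayesian inference described above; the function name, sampling pattern, and regularization parameter are assumptions, and the velocity-induced phase compensation is not shown.

```python
import numpy as np

def reconstruct_hrrp(y, sample_idx, n_range_bins, lam=0.05, n_iter=200):
    """Sparse HRRP reconstruction from under-sampled frequency bins.

    y            : complex measurements at the retained frequency bins
    sample_idx   : indices of the retained bins (under-sampling pattern)
    n_range_bins : length of the reconstructed profile

    Illustrative only: iterative soft thresholding (ISTA) with a
    partial-Fourier sensing matrix, not the paper's LSM / variational
    Bayesian machinery.
    """
    F = np.fft.fft(np.eye(n_range_bins)) / np.sqrt(n_range_bins)  # unitary DFT
    A = F[sample_idx, :]                      # partial Fourier sensing matrix
    x = np.zeros(n_range_bins, dtype=complex)
    step = 1.0                                # valid since ||A^H A|| <= 1 here
    for _ in range(n_iter):
        r = y - A @ x                         # residual in measurement domain
        g = x + step * (A.conj().T @ r)       # gradient step
        mag = np.abs(g)
        x = np.where(mag > lam, (1.0 - lam / np.maximum(mag, 1e-12)) * g, 0)  # soft threshold
    return x                                  # complex HRRP; plot np.abs(x)
```

With full sampling the loop converges to the plain inverse DFT of the measurements; the sparse prior only matters when bins are missing.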
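Returning to the texture-phase study above, phase-only stimuli of the kind it analyses can be produced by discarding the Fourier magnitude. The short sketch below is one straightforward way to do this; the function name and the rescaling to [0, 1] are our choices, not the paper's.

```python
import numpy as np

def phase_only(image):
    """Return the phase-only version of a 2-D grayscale image.

    The Fourier magnitude is set to one everywhere so that only the
    phase spectrum is retained.
    """
    spectrum = np.fft.fft2(image)
    phase = np.exp(1j * np.angle(spectrum))   # unit magnitude, original phase
    recon = np.real(np.fft.ifft2(phase))
    # rescale to [0, 1] for display or feature extraction
    return (recon - recon.min()) / (recon.max() - recon.min() + 1e-12)
```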
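For the edge detector summarized above, the following sketch shows one common way to pick anchor pixels as local maxima of the gradient magnitude along a horizontally or vertically quantized gradient direction. The gradient operator, the fixed threshold, and the function name are assumptions; the entropy-based adaptive thresholding and the segment-linking stage are not reproduced.

```python
import numpy as np

def extract_anchors(gray, grad_thresh=8.0):
    """Flag anchor pixels: local maxima of the gradient magnitude along
    the quantized gradient direction.

    gray        : 2-D grayscale image as a float array.
    grad_thresh : hypothetical fixed threshold; the detector described
                  above adapts it via two-dimensional entropy instead.
    """
    gy, gx = np.gradient(gray.astype(float))   # central-difference gradient
    mag = np.hypot(gx, gy)                     # gradient magnitude
    vertical_edge = np.abs(gx) >= np.abs(gy)   # mostly horizontal gradient -> vertical edge
    anchors = np.zeros(gray.shape, dtype=bool)
    centre = mag[1:-1, 1:-1]
    anchors[1:-1, 1:-1] = np.where(
        vertical_edge[1:-1, 1:-1],
        (centre > mag[1:-1, :-2]) & (centre > mag[1:-1, 2:]),   # compare left/right
        (centre > mag[:-2, 1:-1]) & (centre > mag[2:, 1:-1]),   # compare up/down
    )
    return anchors & (mag > grad_thresh)
```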