We propose a new measure (Γ) to quantify the degree of self-similarity of a shape using branch length similarity (BLS) entropy, which is defined on a simple network consisting of a single node and its branches. To investigate the properties of this measure, we computed the Γ values for 70 object groups (20 shapes in each group) in the MPEG-7 shape database and performed statistical grouping on the values. Groups identified as statistically identical with relatively high Γ values contained visually similar shapes, whereas statistically identical groups with low Γ values contained visually different shapes; however, the topological similarity of the shapes also warrants consideration. The shapes of statistically different groups exhibited significant visual differences from each other. In addition, to show that Γ can be widely applicable when properly combined with other variables, we demonstrated that finger gestures are successfully classified in the (Γ, Z) space, where Z is the correlation coefficient between entropy profiles for gesture shapes. As these applications show, Γ has a strong advantage over conventional geometric measures in that it captures the geometrical and topological properties of a shape together. If BLS entropy could be defined for color, Γ could also be used to characterize images expressed in RGB. We briefly discussed the problems to be solved before the applicability of Γ can be expanded to other fields.

In this paper, we propose surface-code (SC)-based multipartite quantum communication networks (QCNs). We describe an approach that enables us to simultaneously entangle multiple nodes in an arbitrary network topology based on SCs. We also describe how to extend the transmission distance between two arbitrary nodes by using SCs. The numerical results indicate that the transmission distance between nodes can be extended beyond 1000 km by employing simple syndrome decoding. Finally, we describe how to operate the proposed QCN by employing the software-defined networking (SDN) concept.

In this work we considered the quantum Otto cycle within an optimization framework, with the goal of maximizing the power of a heat engine or the cooling power of a refrigerator. In finite-time quantum thermodynamics it is common to consider frictionless trajectories, since these have been shown to maximize the work extraction during the adiabatic processes; moreover, for frictionless cycles the energy of the system decouples from the other degrees of freedom, which simplifies the mathematical treatment. Instead, we considered general limit cycles and used analytical techniques to compute the derivative of the work production over the whole cycle with respect to the time allocated to each of the adiabatic processes. By doing so, we were able to show directly that the frictionless cycle maximizes the work production, implying that optimal power production must necessarily allow for some friction generation so that the duration of the cycle is reduced.

Domain generation algorithms (DGAs) use specific parameters as random seeds to generate a large number of pseudo-random domain names, in order to evade malicious domain name detection. This greatly increases the difficulty of detecting and defending against botnets and malware; a toy sketch of this seed-driven generation is given below.
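The following Python sketch is purely illustrative of how such seed-driven generation works; the seed string, hash choice, and label length are hypothetical and not taken from any real malware family. It shows how a shared seed lets a botmaster and its bots independently compute the same domain list, so only one current domain ever needs to be registered:

```python
import hashlib

def toy_dga(seed: str, date: str, count: int = 10, length: int = 12):
    """Toy DGA: derive a reproducible list of pseudo-random domain
    names from a shared seed and the current date."""
    domains = []
    for i in range(count):
        digest = hashlib.sha256(f"{seed}-{date}-{i}".encode()).hexdigest()
        # Map hex characters onto lowercase letters to form the label.
        label = "".join(chr(ord("a") + int(c, 16) % 26) for c in digest[:length])
        domains.append(label + ".com")
    return domains

print(toy_dga("campaign-seed", "2024-01-01"))
```

Because the generation is deterministic given the seed and date, a defender who recovers the seed can precompute and block the same list, which is why DGA authors rely on the sheer volume of candidate domains.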
Traditional models for detecting algorithmically generated domain names generally rely on statistical features manually extracted from the domain names or network traffic, which are then fed to classifiers; such models require labor-intensive manual feature engineering. In contrast, most state-of-the-art models based on deep neural networks are sensitive to imbalance in the sample distribution and cannot fully exploit the discriminative class features in domain names or network traffic, leading to decreased detection accuracy. To address these issues, we employ the borderline synthetic minority over-sampling technique (borderline-SMOTE) to improve sample balance. We also propose a recurrent convolutional neural network with spatial pyramid pooling (RCNN-SPP) to extract discriminative and distinctive class features. The recurrent convolutional neural network combines a convolutional neural network (CNN) and a bi-directional long short-term memory network (Bi-LSTM) to extract both semantic and contextual information from domain names. We then employ the spatial pyramid pooling strategy to refine the contextual representation by capturing multi-scale contextual information from domain names. The experimental results on different domain name datasets demonstrate that our model achieves 92.36% accuracy, an 89.55% recall rate, a 90.46% F1-score, and a 95.39% AUC in distinguishing DGA-generated from legitimate domain names, and 92.45% accuracy, a 90.12% recall rate, a 90.86% F1-score, and a 96.59% AUC in multi-classification problems. It achieves significant improvements over existing models in terms of accuracy and robustness.

The correct classification of requirements has become an essential task within software engineering. This study presents a comparison of text feature extraction and selection techniques, and of machine learning algorithms, for the problem of software requirements classification, to answer two major questions: "Which works best (Bag of Words (BoW) vs. Term Frequency-Inverse Document Frequency (TF-IDF) vs. Chi-Squared (CHI2)) for classifying software requirements into Functional Requirements (FR) and Non-Functional Requirements (NF), and the sub-classes of Non-Functional Requirements?" and "Which machine learning algorithm provides the best performance for the requirements classification task?". The data used in the research was PROMISE_exp, a recently created dataset that expands the well-known PROMISE repository of labeled software requirements. All documents in the database were cleaned with a set of normalization steps; BoW and TF-IDF were used for feature extraction, and CHI2 for feature selection. The algorithms used for classification were Logistic Regression (LR), Support Vector Machine (SVM), Multinomial Naive Bayes (MNB), and k-Nearest Neighbors (kNN). The novelty of our work lies in the dataset used for the experiment, the detailed description of the steps needed to reproduce the classification, and the comparison between BoW, TF-IDF, and CHI2 on this repository, which has not been covered by other studies. This work will serve as a reference for the software engineering community and will help other researchers to understand the requirements classification process; a minimal sketch of such a pipeline is given below.
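As a minimal sketch of the kind of pipeline just described, assuming a generic CSV with hypothetical `text` and `label` columns rather than the actual PROMISE_exp loading code, one of the TF-IDF + CHI2 + LR configurations might look as follows (the value of `k` is likewise an illustrative assumption):

```python
import pandas as pd
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.feature_selection import SelectKBest, chi2
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import Pipeline

# Hypothetical layout: one requirement per row, with a free-text column
# and a class label (e.g., FR vs. NF); PROMISE_exp's real schema may differ.
df = pd.read_csv("promise_exp.csv")

pipeline = Pipeline([
    ("tfidf", TfidfVectorizer(lowercase=True, stop_words="english")),
    ("chi2", SelectKBest(chi2, k=500)),          # CHI2 feature selection
    ("lr", LogisticRegression(max_iter=1000)),   # Logistic Regression
])

# Macro-averaged F1 across 5 cross-validation folds.
scores = cross_val_score(pipeline, df["text"], df["label"],
                         cv=5, scoring="f1_macro")
print(scores.mean())
```

Swapping `TfidfVectorizer` for `CountVectorizer` yields the BoW variant, and the classifier step can be replaced by SVM, MNB, or kNN to reproduce the full comparison.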
We found that using TF-IDF followed by LR gave the best results for differentiating requirements, with an F-measure of 0.91 in binary classification (tying with SVM in that case), 0.74 in NF classification, and 0.78 in general classification. As future work we intend to compare more algorithms and explore new ways to improve the precision of our models.
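Returning to the RCNN-SPP model from the DGA study above, the following PyTorch sketch shows one plausible way to combine a character-level CNN, a Bi-LSTM, and spatial pyramid pooling; all layer sizes, the vocabulary size, and the pyramid levels are illustrative assumptions rather than the paper's configuration:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class RCNNSPP(nn.Module):
    """Hypothetical sketch of a recurrent convolutional network with
    spatial pyramid pooling for character-level domain classification."""

    def __init__(self, vocab_size=40, embed_dim=32, conv_channels=64,
                 lstm_hidden=64, num_classes=2, pyramid_levels=(1, 2, 4)):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        # CNN: local n-gram (semantic) features over characters.
        self.conv = nn.Conv1d(embed_dim, conv_channels,
                              kernel_size=3, padding=1)
        # Bi-LSTM: contextual features in both directions.
        self.bilstm = nn.LSTM(conv_channels, lstm_hidden,
                              batch_first=True, bidirectional=True)
        self.pyramid_levels = pyramid_levels
        self.fc = nn.Linear(2 * lstm_hidden * sum(pyramid_levels),
                            num_classes)

    def forward(self, x):                          # x: (batch, seq_len) ints
        h = self.embed(x).transpose(1, 2)          # (batch, embed, seq)
        h = F.relu(self.conv(h)).transpose(1, 2)   # (batch, seq, channels)
        h, _ = self.bilstm(h)                      # (batch, seq, 2*hidden)
        h = h.transpose(1, 2)                      # (batch, 2*hidden, seq)
        # Spatial pyramid pooling: max-pool into a fixed number of bins
        # per level, giving a fixed-length multi-scale representation.
        pooled = [F.adaptive_max_pool1d(h, n).flatten(1)
                  for n in self.pyramid_levels]
        return self.fc(torch.cat(pooled, dim=1))

model = RCNNSPP()
logits = model(torch.randint(0, 40, (8, 30)))  # batch of 8 domains, length 30
```

Because each pyramid level pools to a fixed number of bins, the concatenated feature vector has a fixed length regardless of domain-name length. Class balancing could be applied to the training set beforehand, for example with `imblearn.over_sampling.BorderlineSMOTE` on fixed-length encodings.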