Title: A Review of IS Strategy Literature: Current Trends and Future Opportunities Abstract: In this paper, we examine the IS strategy literature since 2008. Using Chen, Mocker, Preston, and Teubner's (2010) review as a starting point, we identified 31 IS strategy studies published since 2008. Further, we evaluated previous frameworks for evaluating IS strategy research. Using the conceptions proposed by Chen et al. (2010), IS strategy as competitive advantage (I), IS functional strategy (II), and strategic alignment (III), we identify overarching themes and propose that there is increased activity bridging the conceptions, providing interesting opportunities for future research. Directions for future research are given for the individual conceptions as well as for bridging conceptions.
36,035
Title: Two combined methods for the global solution of implicit semilinear differential equations with the use of spectral projectors and Taylor expansions Abstract: Two combined numerical methods for solving implicit semilinear differential equations are obtained and their convergence is proved. A comparative analysis of these methods is carried out, and conclusions are drawn about the effectiveness of their application in various situations. In comparison with other known methods, the obtained methods require weaker restrictions on the nonlinear part of the equation. The obtained methods also enable one to compute approximate solutions of the equations on any given time interval and, therefore, to carry out the numerical analysis of the global dynamics of the corresponding mathematical models. Examples demonstrating the capabilities of the developed methods are provided. To construct the methods we use spectral projectors, Taylor expansions and finite differences. Since the spectral projectors used can be computed easily, applying the methods requires no additional analytical transformations.
36,043
Title: Microcalcification Segmentation Using Modified U-net Segmentation Network from Mammogram Images Abstract: Breast cancer is the most common aggressive cancer in women, and early detection can reduce its aggressiveness. However, it is challenging to identify breast cancer features such as microcalcification in mammogram images by the human eye because of their size and appearance. Therefore, automatic detection of microcalcification is essential for diagnosis and proper treatment. This work introduces an automated approach that segments any microcalcification in mammogram images. First, preprocessing is applied to enhance the image. After that, the breast region is segmented from the pectoral region. Suspicious regions are detected using the fuzzy C-means clustering algorithm and divided into negative and positive patches. This procedure eliminates manual labelling of the region of interest. The positive patches, which contain microcalcification pixels, are taken to train a modified U-net segmentation network. Finally, the trained network is used to segment the microcalcification area automatically from mammogram images. This process can assist the radiologist in early diagnosis and increases the segmentation accuracy of the microcalcification regions. The proposed system is trained on the Digital Database for Screening Mammography (DDSM), which was prepared by the University of South Florida, USA. We obtain a 98.5% F-measure and a 97.8% Dice score; the Jaccard index is 97.4%. The average accuracy of the proposed method is 98.2%, which is better than state-of-the-art methods. This work can be embedded in a real-time mammography system.
36,070
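The pipeline above relies on fuzzy C-means to flag suspicious regions before patch extraction. Below is a minimal NumPy sketch of standard fuzzy C-means (fuzzifier m = 2); the stand-in intensity data, the two-cluster setting, and the 0.5 membership threshold are illustrative assumptions, not values from the paper.

```python
import numpy as np

def fuzzy_cmeans(X, c=2, m=2.0, iters=100, seed=0):
    rng = np.random.default_rng(seed)
    U = rng.dirichlet(np.ones(c), size=len(X))       # fuzzy memberships, rows sum to 1
    for _ in range(iters):
        Um = U ** m
        centers = (Um.T @ X) / Um.sum(axis=0)[:, None]
        d = np.linalg.norm(X[:, None, :] - centers[None], axis=2) + 1e-12
        # u_ik = d_ik^(-p) / sum_j d_ij^(-p), with p = 2/(m-1)
        p = 2.0 / (m - 1.0)
        U = 1.0 / (d ** p * (1.0 / d ** p).sum(axis=1, keepdims=True))
    return U, centers

pixels = np.random.default_rng(1).random((500, 1))   # stand-in intensity values
U, centers = fuzzy_cmeans(pixels)
suspicious = U[:, centers[:, 0].argmax()] > 0.5      # membership in the bright cluster
```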
Title: Inference of the two-parameter Lindley distribution based on progressive type II censored data with random removals Abstract: In many practical problems related to progressive Type-II censored sampling plans, the experimental process often makes random removals inevitable, and a fixed-removals assumption can make some results of statistical inference cumbersome to analyze. This paper investigates the estimation problem when lifetimes follow the two-parameter Lindley distribution and are collected under two removal patterns, based on the discrete uniform distribution and the binomial distribution. The maximum likelihood estimates (MLEs) of the parameters are obtained by a derivative-free optimization method, without applying the logarithm of the likelihood function. Furthermore, we propose a method for choosing starting values for the optimization, and we numerically compare the bias, variance, covariance and mean squared error of the MLEs under the two removal plans. The expected experiment times are then discussed and compared numerically under the two approaches to generating random removals. Finally, the optimal progressive Type-II censoring scheme is provided based on the smallest expected experiment time.
36,129
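As a companion to the abstract above, here is a SciPy sketch of derivative-free (Nelder-Mead) likelihood fitting for one common two-parameter Lindley density, f(x) = θ²/(θ+α)(1+αx)e^(−θx). The density form, the synthetic complete (uncensored) data, and the starting values are my assumptions; the paper's likelihood additionally accounts for the progressive censoring scheme, and, unlike the paper, this sketch works with the log-likelihood for numerical stability.

```python
import numpy as np
from scipy.optimize import minimize

def neg_log_lik(params, x):
    theta, alpha = params
    if theta <= 0 or theta + alpha <= 0 or np.any(1 + alpha * x <= 0):
        return np.inf
    # sum of log f(x_i) for f(x) = theta^2/(theta+alpha) (1+alpha*x) e^{-theta*x}
    return -np.sum(2 * np.log(theta) - np.log(theta + alpha)
                   + np.log1p(alpha * x) - theta * x)

x = np.random.default_rng(1).gamma(2.0, 1.5, size=200)  # stand-in lifetimes
res = minimize(neg_log_lik, x0=[0.5, 0.5], args=(x,), method="Nelder-Mead")
theta_hat, alpha_hat = res.x
```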
Title: The effect of game-based immersive virtual reality learning environment on learning outcomes: designing an intrinsic integrated educational game for pre-class learning Abstract: As an emerging learning platform, game-based immersive virtual reality learning environments (GIVRLEs) have the potential to solve difficult teaching problems. This study designed a GIVRLE by integrating knowledge of quadratic functions into gameplay. Forty seventh graders who had never acquired that knowledge played the game and took pre- and posttests. An additional 60 seventh graders took the same math tests as controls. The results showed significant improvements in math achievement and learning motivation between the pre- and posttests among students who played the game. No enhancement of math achievement was found in the control students. The playability survey and user experience questionnaire verified the suitability of the game. The findings indicate that a GIVRLE is a suitable tool for addressing teaching difficulties in K-12. The notion of intrinsic integration between learning content and gameplay based on simulated daily activity tasks is further discussed.
36,214
Title: Enhanced Zone-Based Energy Aware Data Collection Protocol for WSNs (E-ZEAL) Abstract: In the era of IoT, the energy consumption of sensor nodes in WSNs is one of the main challenges. It is crucial to reduce energy consumption due to the limited battery life of the sensor nodes. Recently, the Zone-based Energy-Aware data coLlection (ZEAL) routing protocol was proposed to improve energy consumption and data delivery. In this paper, an enhancement to ZEAL is proposed to improve WSN performance in terms of energy consumption and data delivery. Enhanced ZEAL (E-ZEAL) applies the K-means clustering algorithm to find the optimal path for the mobile-sink node and provides better selection of sub-sink nodes. The experiments are performed using the ns-3 simulator, and the performance of E-ZEAL is compared to ZEAL. E-ZEAL reduces the number of hops and the distance by more than 50%, speeding up the data-collection phase by more than 30% with complete delivery of data. Moreover, E-ZEAL improves the lifetime of the network by 30%.
36,235
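A toy sketch of the K-means step described above: cluster sensor coordinates, treat the centroids as candidate sub-sink positions, and order them into a simple mobile-sink tour. All parameters (node count, cluster count, nearest-neighbour ordering) are my own illustrative choices, not E-ZEAL's.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
nodes = rng.uniform(0, 100, size=(60, 2))        # sensor (x, y) coordinates
km = KMeans(n_clusters=4, n_init=10, random_state=0).fit(nodes)
centroids = km.cluster_centers_                  # candidate sub-sink positions

# visit centroids in nearest-neighbour order as a simple mobile-sink tour
order, pos, rest = [], np.array([0.0, 0.0]), list(range(len(centroids)))
while rest:
    nxt = min(rest, key=lambda i: np.linalg.norm(centroids[i] - pos))
    order.append(nxt)
    pos = centroids[nxt]
    rest.remove(nxt)
print("sub-sink visit order:", order)
```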
Title: THE CROSSING NUMBER OF THE HEXAGONAL GRAPH H_{3,n} IN THE PROJECTIVE PLANE Abstract: Thomassen described all (except finitely many) regular tilings of the torus S_1 and the Klein bottle N_2 into (3,6)-tilings, (4,4)-tilings and (6,3)-tilings. Many researchers have made great efforts to investigate the crossing number of the Cartesian product of an m-cycle and an n-cycle, which is a special kind of (4,4)-tiling, either in the plane or in the projective plane. In this paper we study the crossing number of the hexagonal graph H_{3,n} (n >= 2), which is a special kind of (3,6)-tiling, in the projective plane, and prove that cr_{N1}(H_{3,n}) = 0 for n = 2, and cr_{N1}(H_{3,n}) = n - 1 for n >= 3.
36,245
Title: Packing Trees in Complete Bipartite Graphs Abstract: An embedding of a graph H in a graph G is an injection (i.e., a one-to-one function) σ from the vertices of H to the vertices of G such that σ(x)σ(y) is an edge of G for all edges xy of H. The image of H in G under σ is denoted by σ(H). A k-packing of a graph H in a graph G is a sequence (σ_1, σ_2, ..., σ_k) of embeddings of H in G such that σ_1(H), σ_2(H), ..., σ_k(H) are edge disjoint. We prove that for any tree T of order n, there is a 4-packing of T in a complete bipartite graph of order at most n + 12.
36,452
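To make the definitions above concrete, here is a small checker (my own helper, not from the paper) that verifies whether a sequence of embeddings of a tree into a bipartite host graph forms a k-packing, i.e., has pairwise edge-disjoint images.

```python
def image_edges(tree_edges, sigma):
    """Map each edge xy of H to the edge sigma(x)sigma(y), stored canonically."""
    return {frozenset((sigma[x], sigma[y])) for x, y in tree_edges}

def is_k_packing(tree_edges, embeddings):
    used = set()
    for sigma in embeddings:
        img = image_edges(tree_edges, sigma)
        if used & img:          # an edge of G is reused by two embeddings
            return False
        used |= img
    return True

# Example: pack the path x0-x1-x2 twice into K_{2,2} with parts {0,1} and {2,3}.
path = [(0, 1), (1, 2)]
s1 = {0: 0, 1: 2, 2: 1}         # image edges 0-2 and 2-1
s2 = {0: 0, 1: 3, 2: 1}         # image edges 0-3 and 3-1
print(is_k_packing(path, [s1, s2]))   # True: the two images share no edge
```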
Title: Bounds on the Number of Edges of Edge-Minimal, Edge-Maximal and l-Hypertrees (vol 36, pg 259, 2016) Abstract: In this corrigendum, we correct the proof of Theorem 10 from our paper titled "Bounds on the number of edges of edge-minimal, edge-maximal and l-hypertrees".
36,590
Title: A random forest-based job shop rescheduling decision model with machine failures Abstract: Machine failures are common disturbances in production scheduling, and their occurrences are generally random and uncertain. Rescheduling strategies have been proposed to deal with them. However, the performance of these strategies depends on the status of the machine failure, and no single strategy is best for every failure status. Hence, how to select the optimal strategy intelligently when a machine failure occurs becomes an important issue. With the development of artificial intelligence (AI) and machine learning (ML) techniques, intelligent rescheduling has become possible. In this paper, we propose a new rescheduling decision model based on random forest, an effective machine learning method, to learn the optimal rescheduling strategy for different machine failures. We adopt a genetic algorithm (GA) to generate an initial scheduling scheme. Then we design simulation experiments to obtain data on different machine failures that could affect the initial scheme. For each machine failure, all rescheduling strategies are executed and their performance is evaluated based on delay and deviation; the best strategy is selected as the label. The random forest is trained on these labeled data samples, so the internal mechanism between machine failures and rescheduling strategies can be learned. We conduct experiments to verify the effectiveness of the proposed method, and the results show that its accuracy can be as high as 97%. Moreover, compared with a decision tree (DT) and a support vector machine (SVM), the proposed method shows the best performance.
36,601
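A hedged scikit-learn sketch of the decision step described above: failure descriptors as features, the best-performing rescheduling strategy (found by simulation) as the label. The feature layout, strategy set, and random data are illustrative assumptions.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
# toy failure descriptors: [failed machine id, failure start, repair duration]
X = rng.uniform(0, 1, size=(500, 3))
# label = index of the strategy that scored best (delay/deviation) in simulation
y = rng.integers(0, 3, size=500)    # e.g. 0=right-shift, 1=partial, 2=full reschedule

clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
best_strategy = clf.predict([[0.2, 0.7, 0.1]])[0]   # pick a strategy for a new failure
```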
Title: ON HAMILTONIAN CYCLES IN CLAW-FREE CUBIC GRAPHS Abstract: We show that every claw-free cubic graph of order n at least 8 has at most 2^⌊n/4⌋ Hamiltonian cycles, and we also characterize all extremal graphs.
36,627
Title: An optimal prediction in stationary random fields based on a new interpolation approach Abstract: A new solution is given for the important problem of estimating (interpolating) the missing values of a second-order stationary random field. It is obtained as an appropriate linear combination of the backward and forward optimal predictors. A necessary and sufficient condition is given for this interpolator, referred to as the suboptimal interpolator, to be optimal. An illustration with an example and a simulation study is presented.
36,643
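A one-dimensional illustration of the idea above, using an AR(1) series rather than a general random field: the interpolator that linearly combines the forward predictor φx_{t−1} and the backward predictor φx_{t+1} as φ/(1+φ²)(x_{t−1}+x_{t+1}) beats either predictor alone in mean squared error. The AR(1) setting and parameters are my own simplification.

```python
import numpy as np

rng = np.random.default_rng(0)
phi, n = 0.8, 10_000
x = np.zeros(n)
for t in range(1, n):
    x[t] = phi * x[t - 1] + rng.normal()

t_miss = np.arange(2, n - 2, 7)                    # pretend these values are missing
forward  = phi * x[t_miss - 1]                     # predict from the past only
backward = phi * x[t_miss + 1]                     # predict from the future only
combined = (phi / (1 + phi**2)) * (x[t_miss - 1] + x[t_miss + 1])
for name, est in [("forward", forward), ("backward", backward), ("combined", combined)]:
    print(name, np.mean((est - x[t_miss])**2))     # combined has the smallest MSE
```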
Title: Key aspects of covert networks data collection: Problems, challenges, and opportunities Abstract: • Systematic data collection on covert networks is advocated. • Six key aspects of covert network data collection are identified. • Problems with secondary and missing data are discussed. • Biographies, graph databases, and checklists are proposed as starting solutions.
36,738
Title: Optimized Bonferroni approximations of distributionally robust joint chance constraints Abstract: A distributionally robust joint chance constraint involves a set of uncertain linear inequalities which can be violated up to a given probability threshold epsilon, over a given family of probability distributions of the uncertain parameters. A conservative approximation of a joint chance constraint, often referred to as a Bonferroni approximation, uses the union bound to approximate the joint chance constraint by a system of single chance constraints, one for each original uncertain constraint, for a fixed choice of violation probabilities of the single chance constraints such that their sum does not exceed epsilon. It has been shown that, under various settings, a distributionally robust single chance constraint admits a deterministic convex reformulation. Thus the Bonferroni approximation approach can be used to build convex approximations of distributionally robust joint chance constraints. In this paper we consider an optimized version of Bonferroni approximation where the violation probabilities of the individual single chance constraints are design variables rather than fixed a priori. We show that such an optimized Bonferroni approximation of a distributionally robust joint chance constraint is exact when the uncertainties are separable across the individual inequalities, i.e., each uncertain constraint involves a different set of uncertain parameters and corresponding distribution families. Unfortunately, the optimized Bonferroni approximation leads to NP-hard optimization problems even in settings where the usual Bonferroni approximation is tractable. When the distribution family is specified by moments or by marginal distributions, we derive various sufficient conditions under which the optimized Bonferroni approximation is convex and tractable. We also show that for moment based distribution families and binary decision variables, the optimized Bonferroni approximation can be reformulated as a mixed integer second-order conic set. Finally, we demonstrate how our results can be used to derive a convex reformulation of a distributionally robust joint chance constraint with a specific nonseparable distribution family.
36,756
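For a moment-based (mean/covariance) ambiguity set, a single distributionally robust chance constraint has the well-known second-order cone form μᵢᵀx + sqrt((1−εᵢ)/εᵢ)·‖Σᵢ^{1/2}x‖ ≤ bᵢ, so the plain Bonferroni approximation with fixed εᵢ = ε/m is a tractable SOCP. The CVXPY sketch below illustrates exactly that baseline; optimizing the εᵢ jointly, as the paper studies, is the harder problem. All data are synthetic.

```python
import numpy as np
import cvxpy as cp

m, n, eps = 3, 4, 0.05
rng = np.random.default_rng(0)
mus = [rng.uniform(0, 1, n) for _ in range(m)]          # means of uncertain rows a_i
Ls  = [np.linalg.cholesky(np.eye(n) * 0.1) for _ in range(m)]  # Sigma_i = L_i L_i'
b   = rng.uniform(2, 3, m)
c   = rng.uniform(0, 1, n)

x = cp.Variable(n, nonneg=True)
eps_i = eps / m                                          # fixed Bonferroni split
kappa = np.sqrt((1 - eps_i) / eps_i)
cons = [mus[i] @ x + kappa * cp.norm(Ls[i].T @ x, 2) <= b[i] for i in range(m)]
prob = cp.Problem(cp.Maximize(c @ x), cons)
prob.solve()
print(prob.value, x.value)
```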
Title: A simulation model to investigate impacts of facilitating quality data within organic fresh food supply chains Abstract: Demand for and production of organic fresh food play an increasing role worldwide. As a result, a growing amount of fresh fruits and vegetables has to be transported from predominantly rural production regions to customers mostly located in urban ones. Specific handling and storage conditions need to be respected along the entire supply chain to maintain high quality and product value. To support organic food logistics operations, this work investigates benefits of facilitating real-time product data along delivery and storage processes. By the development of a simulation-based decision support system, sustainable deliveries of organic food from farms to retail stores are investigated. Generic keeping quality models are integrated to observe impacts of varying storage temperatures on food quality and losses over time. Computational experiments study a regional supply chain of organic strawberries in Lower Austria and Vienna. Results indicate that the consideration of shelf life data in supply chain decisions allows one to reduce food losses and further enables shifting surplus inventory to alternative distribution channels.
36,794
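A highly simplified sketch of a generic keeping-quality model of the kind the abstract integrates: remaining quality is consumed at an Arrhenius-type, temperature-dependent rate as the produce moves through transport and storage stages. Every parameter below is an invented placeholder, not from the paper.

```python
import numpy as np

EA_OVER_R = 8000.0          # activation energy / gas constant, in Kelvin (assumed)
T_REF = 278.15              # 5 C reference temperature (assumed)
KQ_REF_DAYS = 7.0           # keeping quality at T_REF, e.g. strawberries (assumed)

def rate(temp_k):
    # relative quality-loss rate versus the reference temperature (=1 at T_REF)
    return np.exp(-EA_OVER_R * (1.0 / temp_k - 1.0 / T_REF))

stages = [(0.25, 283.15), (1.0, 277.15), (0.5, 285.15)]   # (days, temperature in K)
quality = 1.0
for days, temp in stages:
    quality -= (days / KQ_REF_DAYS) * rate(temp)
remaining_shelf_life = max(quality, 0.0) * KQ_REF_DAYS    # days left if kept at T_REF
print(round(remaining_shelf_life, 2))
```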
Title: Self-efficacy and problem-solving skills in mathematics: the effect of instruction-based dynamic versus static visualization Abstract: This study explores the self-efficacy and problem-solving skills of middle school mathematics students. The students, 111 9th graders who were studying a unit on the analysis of functions, were given mathematics instruction that was based on either dynamic or static visualization. Findings revealed a positive impact of instruction based on dynamic visualization, which involved the use of the technological GeoGebra application, compared to instruction based on static visualization. The students who were exposed to dynamic visualization instruction displayed high levels of mathematics self-efficacy in real time. Improvement in the mathematics problem-solving skills of these students was shown both immediately after the intervention and three months later, demonstrating better conceptual and procedural understanding. The findings imply that exposure to instruction-based dynamic visualization contributed to closing both the affective and cognitive gaps between high and low achievers. The study offers a significant contribution to theoretical and methodological aspects, and provides practical understanding of instruction-based dynamic visualization and the performance of mathematics students in both the affective and cognitive domains.
36,839
Title: Estimation of interquartile range in stratified sampling under non-linear cost function Abstract: Under a stratified random sampling scheme, we discuss three estimators of the finite population interquartile range (IQR) that use the known values of the first and third quartiles of the auxiliary variable. Expressions for the bias and MSE are derived up to first order of approximation. We propose an allocation procedure in which a non-linear cost function is considered to minimize the mean square error (MSE) of the estimators for a specified cost. We use a real data set to compare the performance of the estimators under proportional allocation and the proposed allocation.
36,856
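A simplified numerical sketch of the setting above: a weighted sample quantile estimator under stratified sampling, from which the IQR estimate follows. This is my own baseline; the paper's three estimators further adjust it using the known first and third quartiles of the auxiliary variable.

```python
import numpy as np

def weighted_quantile(values, weights, q):
    order = np.argsort(values)
    v, w = np.asarray(values)[order], np.asarray(weights)[order]
    cum = np.cumsum(w) / np.sum(w)          # empirical weighted CDF
    return np.interp(q, cum, v)

# three strata with weights W_h = N_h / N and n_h sampled units each
strata = [np.random.default_rng(h).normal(10 + h, 2, 30) for h in range(3)]
Wh = np.array([0.5, 0.3, 0.2])
y = np.concatenate(strata)
w = np.concatenate([np.full(len(s), Wh[h] / len(s)) for h, s in enumerate(strata)])
iqr_hat = weighted_quantile(y, w, 0.75) - weighted_quantile(y, w, 0.25)
print(iqr_hat)
```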
Title: A multi-stage stochastic programming model of lot-sizing and scheduling problems with machine eligibilities and sequence-dependent setups Abstract: We focus on the lot-sizing and scheduling problem with the additional considerations of machine eligibility, sequence-dependent setups, and uncertain demands. Multi-stage stochastic programming is proposed. We analyze the problem structure and suggest ways for modeling and solving large-scale stochastic integer programs. The analysis compares deterministic and stochastic model solutions to assess demand variance effects under the circumstances of increasing, fluctuating, and decreasing demands. The results show that the expected cost performance of the stochastic programming model outperforms that of the deterministic model, particularly when demand is highly uncertain in an upward market trend. Our study applies to wafer fab manufacturing and other industries that are heavily constrained by machine eligibility and demand uncertainty.
36,858
Title: Optimal container resource allocation in cloud architecture: A new hybrid model Abstract: A huge variety of fields and industries depends on cloud computing-based microservices due to their high-performance capability. The merits of container usage are also considerable: containers enable greater portability, easier and faster deployment, and low overheads. However, this rapid evolution causes issues in container automation and management. To date, a number of research works have concentrated on solving the open issues in container automation and management. In fact, container resource allocation is a key concern for cloud providers since it directly influences resource consumption and system performance. Accordingly, this paper introduces a new optimized container resource allocation model built on a new optimization concept. To enable optimal container resource allocation, a new hybridized algorithm, the Whale Random update assisted Lion Algorithm (WR-LA), a hybrid of the Lion Algorithm (LA) and the Whale Optimization Algorithm (WOA), is introduced. Moreover, the optimized resource allocation considers objectives such as Threshold Distance, Balanced Cluster Use, System Failure, and Total Network Distance. Finally, the performance of the proposed model is compared with other conventional models, demonstrating its superiority.
36,903
Title: Profiling analysis of DISC personality traits based on Twitter posts in Bahasa Indonesia Abstract: Data in the timeline of social media users consist of text, images, audio, and video. Large and unstructured social media data can be processed using various techniques such as text processing or image processing. In this study, processed text data are used to classify Twitter users' personality based on the DISC framework. Out of the 292 users initially collected, we semi-automatically filtered for personal accounts with Indonesian-language posts. To be able to observe and assess a user's personality from the choice of words in their tweets, we built keyword vocabularies corresponding to the DISC framework and theory. Four experiment scenarios are conducted in this study, varying whether the keywords and text data are stemmed or not, and whether the keyword frequency calculation is weighted or not. Weighting the keyword frequencies by their level does not yield positive results, nor does stemming: the best results are obtained in the non-stemmed, non-weighted scenario. This study is preliminary research toward an automatic profiling system that combines Natural Language Processing and Machine Learning approaches.
37,021
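An illustrative sketch of the keyword-frequency idea in the non-stemmed, non-weighted scenario that performed best above. The tiny vocabularies are stand-ins (rendered in English) for the paper's Bahasa Indonesia keyword lists.

```python
VOCAB = {
    "D": {"win", "lead", "target", "now"},
    "I": {"fun", "friends", "party", "awesome"},
    "S": {"together", "help", "calm", "family"},
    "C": {"detail", "plan", "exact", "check"},
}

def disc_profile(tweets):
    counts = {trait: 0 for trait in VOCAB}
    for tweet in tweets:
        for token in tweet.lower().split():
            for trait, words in VOCAB.items():
                if token in words:
                    counts[trait] += 1          # unweighted frequency count
    return max(counts, key=counts.get), counts

label, counts = disc_profile(["Let's win this target now", "family dinner, so calm"])
print(label, counts)
```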
Title: Regression analysis with compositional data using orthogonal log-ratio coordinates Abstract: Compositional data frequently arise when data refer to components which are proportions or fractions of a whole. Within the log-ratio approach, the analysis of compositional data can be conducted in terms of log-ratio transformations of components. These transformations make it possible to overcome the problem of the constant-sum constraint, making standard statistical methods applicable. In the present work, the log-ratio approach based on orthogonal log-ratio coordinates is adopted to show how it can lead to considerable improvements in the interpretation of the results of regression modeling with compositional data, both as explanatory or response variables. In order to demonstrate its practical usefulness, the methodology presented in this paper is applied to the analysis of air pollution produced by vehicles traveling through road intersections, with a specific focus on the effect of the type of traffic control (traffic signal vs. roundabout) on CO2 emissions.
37,033
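A compact NumPy sketch of orthonormal (ilr) log-ratio coordinates followed by ordinary least squares, with compositional data as the explanatory variables. The coordinates z_j = sqrt(j/(j+1))·ln(g(x_1,…,x_j)/x_{j+1}), with g the geometric mean, are one standard orthonormal basis; the data are synthetic.

```python
import numpy as np

def ilr(X):
    X = np.asarray(X, dtype=float)
    n, D = X.shape
    logX = np.log(X)
    Z = np.empty((n, D - 1))
    for j in range(1, D):
        gm = logX[:, :j].mean(axis=1)            # log geometric mean of first j parts
        Z[:, j - 1] = np.sqrt(j / (j + 1)) * (gm - logX[:, j])
    return Z

rng = np.random.default_rng(0)
comp = rng.dirichlet([4, 3, 2], size=100)        # compositional explanatory data
yresp = 1.5 * comp[:, 0] + rng.normal(0, 0.05, 100)
Z = ilr(comp)                                    # unconstrained real coordinates
beta, *_ = np.linalg.lstsq(np.column_stack([np.ones(100), Z]), yresp, rcond=None)
print(beta)
```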
Title: An online learning system supporting student-generated explanations for questions: design, development, and pedagogical potential Abstract: An online system leveraging self-explanation was developed. The theoretical basis and design principles guiding the development of the system are explicated. Four evaluation studies were conducted to assess the student-generated explanations component accompanying student-generated questions (SGQ) and the embedded designs. The analyzed data revealed several important findings. First, SGQ complemented by student-generated explanations (as compared to SGQ alone) was regarded as promoting learning better by a sizeable majority of the participants, and its facilitating effects on cognitive and affective aspects were noted. Second, the fact that the exact same set of cognitive and affective gains and similar patterns in the spread of responses were found for explanation-generation for self- and peer-generated questions provided preliminary evidence supporting the "manageable versatility" design principle. Third, the result indicating that a predominant percentage of the participants supported online practice of SGQ complemented by student-generated explanations substantiated the "manageable integration" guiding principle. Finally, the finding that a significantly greater percentage of the participants felt multimedia-equipped explanations to be better for SGQ (as compared to text-based explanations), both cognitively and affectively, provided preliminary evidence supporting the effectiveness of the multimedia design in terms of attaining the multiple feedback functions.
37,084
Title: The Static Stability Of Support Factor-Based Rectangular Packings: An Assessment By Regression Analysis Abstract: This work presents new insights into how solutions of support factor-based rectangular packings behave with respect to their static stability. In particular, we address the constrained two-dimensional packing problem, which is solved using a known integer linear programming model that positions items over a grid of points. The model embeds constraints, based on a support factor parameter, that ensure a minimum support for the base of the items. The solutions obtained from the model are then evaluated by a procedure that verifies the conditions for static stability. Computational tests were performed on a large variety of randomly generated instances, and the outputs were assessed by means of regression analysis (linear and logistic). The results show which characteristics of the instances contribute directly and inversely to the probability of obtaining statically stable packing patterns. This outcome may be useful for guiding the choice of support factor values in some practical contexts.
37,220
Title: A matheuristic for the 0-1 generalized quadratic multiple knapsack problem Abstract: In this study, we address the 0-1 generalized quadratic multiple knapsack problem. We use a linearization technique on the existing mathematical model and propose a new matheuristic, called Matheuristic Variable Neighborhood Search, which combines variable neighborhood search with integer programming (IP) to solve large-sized instances. The matheuristic uses a local search technique with an adaptive perturbation mechanism to assign the classes to different knapsacks; once the assignment is identified, it applies the IP to select the items to allocate to each knapsack. Experimental results obtained on a wide set of benchmark instances clearly show the competitiveness of the proposed approach compared to the best state-of-the-art solving techniques.
37,239
Title: Big data analytics for large-scale UAV-MBN in quantum networks using efficient hybrid GKM Abstract: A primary concern in the wireless network domain is secure communication, as signals are openly available and broadcast through the air. Quantum wireless networks have been a hot research topic in recent years, prompting researchers to develop new techniques in this field. Big data is one of the major aspects supporting large-scale quantum network applications. As sensor nodes have resource constraints, key establishment is more complex than with traditional techniques, which were designed for stable communication networks. Group-oriented applications, in which messages are sent to and received from multiple users, are also an important aspect of wireless networks. Unfortunately, maintaining security for group-oriented protocols is a major issue due to frequent changes in membership. In this paper, a hybrid GKM with a re-keying procedure is proposed to obtain secure communication for a remote system of unmanned autonomous vehicles (UAV) with a mobile backbone network (MBN). Here, the group controller is selected based on the simplified hybrid energy-efficient distributed (HEED) protocol, and the honey key encryption algorithm is used by the key management center (KMC) for generating and distributing keys for the group's controller. Secure communication is achieved using a key exchange mechanism between the nodes. The joining and leaving of a node from the group initiate the re-keying process. The simulation results show the improved performance of the proposed hybrid method compared with other existing techniques in terms of privacy level, energy, memory, and time consumption.
37,451
Title: A MIP-CP based approach for two- and three-dimensional cutting problems with staged guillotine cuts Abstract: This work presents guillotine constraints for two- and three-dimensional cutting problems. These problems look for a subset of rectangular items of maximum value that can be cut from a single rectangular container. Guillotine constraints seek to ensure that items are arranged in such a way that cuts from one edge of the container to the opposite edge completely separate them. In particular, we consider the possibility of 2, 3, and 4 cutting stages in a predefined sequence. These constraints are considered within a two-level iterative approach that combines the resolution of integer linear programming and constraint programming models. Experiments with instances of the literature are carried out, and the results show that the proposed approach can solve in less than 500 s approximately 60% and 50% of the instances for the two- and three-dimensional cases, respectively. For the two-dimensional case, in comparison with the recent literature, it was possible to improve the upper bound for 16% of the instances.
37,575
Title: On the Optimality of 3-Restricted Arc Connectivity for Digraphs and Bipartite Digraphs Abstract: Let D be a strong digraph. An arc subset S is a k-restricted arc cut of D if D − S has a strong component D′ with order at least k such that D \ V(D′) contains a connected subdigraph with order at least k. If such a k-restricted arc cut exists in D, then D is called λ_k-connected. For a λ_k-connected digraph D, the k-restricted arc connectivity, denoted by λ_k(D), is the minimum cardinality over all k-restricted arc cuts of D. It is known that for many digraphs λ_k(D) ≤ ξ_k(D), where ξ_k(D) denotes the minimum k-degree of D. D is called λ_k-optimal if λ_k(D) = ξ_k(D). In this paper, we give some sufficient conditions for digraphs and bipartite digraphs to be λ_3-optimal.
37,613
Title: Multi-model LSTM-based convolutional neural networks for detection of apple diseases and pests Abstract: In this paper, we proposed Multi-model LSTM-based Pre-trained Convolutional Neural Networks (MLP-CNNs) as an ensemble majority voting classifier for the detection of plant diseases and pests. The proposed hybrid model is based on the combination of an LSTM network with pre-trained CNN models. Specifically, in transfer learning, we adopted deep feature extraction from various fully connected layers of these pre-trained deep models. The AlexNet, GoogleNet and DenseNet201 models are used in this work for feature extraction. The extracted deep features are then fed into the LSTM layer in order to construct a robust hybrid model for apple disease and pest detection. The output predictions of the three LSTM layers then determine the class labels of the input images via a majority voting classifier. In addition, we use an automatic scheme for determining the best choice of the network parameters of the LSTM layer. The experiments are carried out using data consisting of real-time apple disease and pest images from Turkey, and accuracy rates are calculated for performance evaluation. The experimental results show that by using the proposed ensemble combination structure, the results are comparable to, or better than, the pre-trained deep architectures.
37,626
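A minimal sketch of the final majority-voting step described above, with three per-image label vectors standing in for the outputs of the three CNN+LSTM branches.

```python
import numpy as np

preds = np.array([
    [0, 1, 2, 2, 1],   # e.g. AlexNet+LSTM branch
    [0, 1, 1, 2, 1],   # e.g. GoogleNet+LSTM branch
    [0, 2, 2, 2, 0],   # e.g. DenseNet201+LSTM branch
])
# per-image majority label across the three branches
final = np.array([np.bincount(col).argmax() for col in preds.T])
print(final)   # [0 1 2 2 1]
```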
Title: Adaptive user-oriented fuzzy-based service broker for cloud services Abstract: This paper presents an adaptive fuzzy-based cloud service brokering algorithm (AFBSB). The proposed algorithm employs an adaptive fuzzy-based engine to select the most appropriate data center for user cloud service requests, considering user preferences in terms of cost and performance. The algorithm is implemented using an open-source cloud computing simulation tool. The algorithm's results are tested against the results of other existing techniques within two types of cloud environments: first, a performance-constrained environment in which performance improvement is the main objective for cloud users; second, a preference-aware environment, in which cloud users have different cost and performance preferences. Simulation results show that the proposed algorithm can achieve significant performance improvement in performance-constrained environments. As for the preference-aware environment, they show that AFBSB can operate in a user-oriented manner that guarantees performance/cost improvement compared to other algorithms.
37,676
Title: Jackknife method for the location of gross errors in weighted total least squares Abstract: Because the weighted total least squares (WTLS) method lacks robustness and is sensitive to gross errors, it cannot eliminate the influence of outliers effectively. A small number of gross errors may produce a devastating effect on estimates. Focusing on this limitation of the WTLS method, this work combines Jackknife resampling theory with the WTLS algorithm for the identification and detection of outliers. The gross errors in the WTLS estimation are located by the Jackknife method to further improve the quality of the estimated values when the observation data are contaminated by outliers. This paper focuses on two cases: a single gross error and multiple gross errors. Detailed calculation steps and the whole procedure for outlier detection using the new method are given. The algorithm is applied to the straight-line fitting model and the plane coordinate transformation model. From the experimental estimation results, we can see that the method proposed in this paper can identify gross errors that are greater than or equal to three times the standard error, and obtain more accurate estimation values when compared with the WTLS method and the classic robust weighted total least squares (RWTLS) method. The numerical case studies verify the effectiveness and practicality of the proposed procedure.
37,691
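A hedged sketch of the leave-one-out idea, with ordinary least squares standing in for WTLS: delete one observation at a time, refit, and flag points whose deletion residual exceeds three times a robust (MAD-based) scale estimate, echoing the paper's three-sigma detection threshold. The data and the threshold constant are illustrative.

```python
import numpy as np

def jackknife_flags(x, y, k=3.0):
    n = len(x)
    resid = np.empty(n)
    for i in range(n):
        mask = np.arange(n) != i                 # leave observation i out
        A = np.column_stack([np.ones(n - 1), x[mask]])
        coef, *_ = np.linalg.lstsq(A, y[mask], rcond=None)
        resid[i] = y[i] - (coef[0] + coef[1] * x[i])   # deletion (prediction) residual
    scale = 1.4826 * np.median(np.abs(resid - np.median(resid)))  # robust sigma
    return np.abs(resid) > k * scale

rng = np.random.default_rng(0)
x = np.linspace(0, 10, 40)
y = 2.0 + 0.5 * x + rng.normal(0, 0.1, 40)
y[7] += 2.0                                      # inject one gross error
print(np.where(jackknife_flags(x, y))[0])        # typically flags index 7
```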
Title: Toward energy-efficient data management design for sustainable cities and societies Abstract: Continuous communication and diverse interaction between various devices over the Internet are increasingly demanded as we move into the Internet of Things (IoT). Sustainable cities are designed around the concept of IoT. IoT devices continuously consume enormous amounts of energy, which needs to be managed efficiently. Energy efficiency rests on the optimization of the energy infrastructure. In addition, heterogeneous devices in sustainable cities produce enormous amounts of data. This data needs to be processed and analyzed alongside smart energy management to reach smart decisions. Planning is becoming realistic thanks to the quantity of data provided in sustainable cities. In this approach, the objective is to illustrate the data generated in sustainable cities in real time. To meet these requirements, this work presents a novel architecture that reflects the ecology of sustainable cities, comprising sensors, cameras, and other objects, along with energy management (e.g., the Internet of Energy). The proposed system is a layered architecture composed of data collection and energy management, data computation, and decision-making layers. An energy-efficient clustering algorithm and optimized sleep-scheduling methods are used in the first layer to collect data from IoT devices with regard to energy management. The second layer is responsible for resourceful data computation with energy efficiency. The third layer provides valuable insights and makes intelligent decisions. The architecture is verified with reliable datasets related to smart parking using IoT devices to test and reveal its effectiveness. The assessments show that the proposed scheme delivers valuable insights regarding energy efficiency in sustainable cities and societies.
37,740
Title: Most stringent test of null of cointegration: a Monte Carlo comparison Abstract: To test for the existence of a long-run relationship, a variety of null-of-cointegration tests have been developed in the literature. This study compares these tests on the basis of size and power using the stringency criterion: a robust technique for comparing tests, as it provides a single number representing the maximum difference between a test's power and the maximum possible power over the entire parameter space. It is found that, in general, asymptotic critical values tend to produce size distortion, and the size of the test is controlled when simulated critical values are used. The simple LM test based on the KPSS statistic is the most stringent test at all sample sizes for all three specifications of the deterministic component, as its maximum difference approaches zero and stays below 20% over the entire parameter space.
37,792
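A schematic Monte Carlo sketch of the stringency idea in an entirely toy setting (a one-sided z-test versus a sign test for a normal mean): a test's stringency is its worst-case power shortfall from the power envelope over the parameter grid, here crudely approximated by the pointwise maximum of the compared tests.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n, reps, alpha = 30, 2000, 0.05
grid = np.linspace(0.1, 1.0, 10)           # alternative means

def power(test, mu):
    rejections = sum(test(rng.normal(mu, 1, n)) for _ in range(reps))
    return rejections / reps

z_test    = lambda x: np.sqrt(n) * x.mean() > stats.norm.ppf(1 - alpha)
sign_test = lambda x: (x > 0).sum() > stats.binom.ppf(1 - alpha, n, 0.5)

pz = np.array([power(z_test, m) for m in grid])
ps = np.array([power(sign_test, m) for m in grid])
envelope = np.maximum(pz, ps)               # crude stand-in for the power envelope
print("z-test shortfall:   ", (envelope - pz).max())
print("sign-test shortfall:", (envelope - ps).max())
```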
Title: Certify or not? An analysis of organic food supply chain with competing suppliers Abstract: Customers expect companies to provide clear health-related information for the products they purchase in a big data environment. Organic food is data-enabled with the organic label, but the certification cost discourages small-scale suppliers from certifying their product. This lack of a label means that a product satisfying the organic standard is regarded as a conventional product. By considering the trade-off between the profit gained from the organic label and the additional costs of certification, this paper investigates an organic food supply chain where a leading retailer procures from two suppliers with different brands. Customers care about both the brand value and quality (more specifically, whether the food is organic or not) when purchasing the product. We explore the organic certification and wholesale pricing strategies for the suppliers, and the supplier selection and retail pricing strategies for the retailer. We find that when the two suppliers adopt asymmetric certification strategies, the retailer tends to procure the product with the organic label. The supplier without a brand name can compensate with organic certification, which leads to more profit than the branded rival. As the risk of being abandoned by the retailer increases, the supplier without a brand name is more eager than the rival to obtain the organic label. If both suppliers certify the product, however, they fall into a prisoner's dilemma in situations with low health utility from the organic label and high certification cost.
37,895
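A toy payoff check (all numbers invented) of the certify-or-not game structure the abstract describes: with low label utility and high certification cost, mutual certification can be the unique Nash equilibrium even though both suppliers would earn more by jointly not certifying, i.e., a prisoner's dilemma.

```python
import itertools

# payoff[(a1, a2)] = (supplier 1 profit, supplier 2 profit); C = certify, N = not
payoff = {
    ("N", "N"): (5, 5),
    ("C", "N"): (6, 2),
    ("N", "C"): (2, 6),
    ("C", "C"): (3, 3),   # both pay the certification cost
}

def best_response(me, other):
    return max("CN", key=lambda a: payoff[(a, other) if me == 1 else (other, a)][me - 1])

nash = [(a1, a2) for a1, a2 in itertools.product("CN", repeat=2)
        if a1 == best_response(1, a2) and a2 == best_response(2, a1)]
print(nash)            # [('C', 'C')] even though ('N', 'N') pays both suppliers more
```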
Title: On finding optimum commuting path in a road network: A computational approach for smart city traveling Abstract: Commuting in big cities with heavy traffic is a real-world task faced by many on a daily basis. Finding a suitable path for commuting in real-life complex traffic networks is an important research problem with many applications. The existing work in this domain is based on the travel time and distance from source to destination. However, other than these two factors, there are many additional features that impact the overall travel time and its quality. Some of these additional features include environmental factors, road condition, and the traffic flow. The driving time can be minimized by selecting the most suitable path where there is less congestion and other travel-related conditions are favorable. Commuting duration can increase even on the shortest path if there is congestion or the route is blocked. This work presents a mobile crowdsourcing-based model to find suitable commuting path(s) by considering the factors that directly or indirectly influence the overall travel time. The experiments in this work draw on a naturalistic driving study to select the travel-related features. An algorithm is proposed to find a suitable path from a user-provided source to the destination using crowdsourced data generated by a mobile application. Unlike other algorithms, the proposed approach can address network peculiarities where travel cost is not based only on the distance between the nodes but also involves other indirect factors. This work extracts all possible paths from a source to the destination and then computes the travel cost in terms of distance and satellite factors across the paths. The proposal is evaluated on eight large real-world road network data sets. A comparison is performed with four state-of-the-art pathfinding methods: the Floyd-Warshall algorithm, the Bellman-Ford algorithm, the open shortest path first algorithm, and Dijkstra's algorithm. Empirical analysis shows that the additional factors incorporated in the proposed mobile crowdsourcing model while finding a suitable path have a significant impact on the travel time. The results show better performance of the proposed model than its counterparts.
37,901
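A small sketch of the core idea above: Dijkstra's algorithm over a composite edge cost that blends raw distance with congestion and road-condition factors. The blending weights and the tiny graph are my own illustrations, not the paper's model.

```python
import heapq

def composite_cost(dist, congestion, condition, alpha=1.0, beta=0.5, gamma=0.3):
    # congestion and condition in [0, 1]; higher means worse
    return dist * (alpha + beta * congestion + gamma * condition)

def best_path(graph, src, dst):
    pq, seen = [(0.0, src, [src])], set()
    while pq:
        cost, u, path = heapq.heappop(pq)
        if u == dst:
            return cost, path
        if u in seen:
            continue
        seen.add(u)
        for v, (d, cg, cd) in graph.get(u, {}).items():
            if v not in seen:
                heapq.heappush(pq, (cost + composite_cost(d, cg, cd), v, path + [v]))
    return float("inf"), []

graph = {
    "A": {"B": (4, 0.9, 0.2), "C": (6, 0.1, 0.1)},   # A-B is shorter but congested
    "B": {"D": (3, 0.2, 0.1)},
    "C": {"D": (2, 0.1, 0.3)},
}
print(best_path(graph, "A", "D"))   # picks A-C-D despite the longer raw distance
```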
Title: A verifiable hidden policy CP-ABE with decryption testing scheme and its application in VANET Abstract: As a new kind of cryptographic primitive, attribute-based encryption (ABE) is widely used in various complex scenarios because it provides access control while encrypting messages. However, existing ciphertext-policy ABE (CP-ABE) encryption schemes have some inherent defects, such as a lack of privacy preservation and inefficient decryption. These weak points make them difficult to deploy in scenarios with strict real-time and data confidentiality requirements, for example, vehicular ad hoc networks (VANETs). Therefore, we propose a verifiable hidden policy CP-ABE with decryption testing scheme (or VHPDT, for short). It has the following features: a hidden policy for privacy preservation, and outsourced decryption testing that can verify the correctness of the decrypted result. Furthermore, we apply it to VANETs.
37,913
Title: ICBayes: a package of Bayesian semiparametric regression for interval-censored data Abstract: Interval-censored data arise frequently in medical studies of diseases that require periodic examinations for symptoms of interest, such as disease-free survival (DFS). This paper provides an introduction to the R package ICBayes, which implements a set of programs for analyzing case 1 and case 2 interval-censored data under a Bayesian semiparametric framework. The main function ICBayes fits commonly used survival regression models: proportional hazards, proportional odds, and probit. A simulation study is conducted to compare the performance of the package with two R packages that fit Bayesian proportional hazards, proportional odds, and accelerated failure time models. The use of the package is illustrated through analyzing case 2 interval-censored breast cosmesis data and case 1 interval-censored animal tumorigenicity data.
37,925
Title: Cooperative Output Tracking of Unknown Heterogeneous Linear Systems by Distributed Event-Triggered Adaptive Control Abstract: This article addresses the cooperative output tracking problem of a class of linear minimum-phase multiagent systems, where the agent dynamics are unknown and heterogeneous. A distributed event-triggered model reference adaptive control strategy is developed. It is shown that under the proposed event-triggered control strategy, the outputs of all the agents synchronize to the output of the leader ...
41,667
Title: Hierarchical Granular Computing-Based Model and Its Reinforcement Structural Learning for Construction of Long-Term Prediction Intervals Abstract: As one of the most essential sources of energy, byproduct gas plays a pivotal role in the steel industry, for which the flow tendency is generally regarded as the guidance for planning and scheduling in real production. In order to obtain the numeric estimation along with its reliability, the construction of prediction intervals (PIs) is highly demanded by any practical applications as well as bei...
42,380
Title: Fast Approximation of Coherence for Second-Order Noisy Consensus Networks Abstract: It has been recently established that for second-order consensus dynamics with additive noise, the performance measures, including the vertex coherence and network coherence defined, respectively, as the steady-state variance of the deviation of each vertex state from the average and the average steady-state variance of the system, are closely related to the biharmonic distances. However, direct c...
42,385
Title: 3-D Human Pose Estimation Using Iterative Conditional Squeeze and Excitation Networks Abstract: We propose a new method for single-camera real-world 3-D human pose estimation. Our method uses multitask training together with iterative pose refinement using a novel conditional attention mechanism. For iterative pose refinement, the output of each convolutional layer is conditioned on the latest pose estimate, using a conditioned squeeze-and-excitation network architecture that incorporates no...
44,109
Title: A Refined 3-in-1 Fused Protein Similarity Measure: Application in Threshold-Free Hub Detection Abstract: An exhaustive literature survey shows that finding protein/gene similarity is an important step towards solving widespread bioinformatics problems, such as predicting protein-protein interactions, analyzing Protein-Protein Interaction Networks (PPINs), gene prioritization, and disease gene/protein detection. In this article, we propose an improved 3-in-1 fused protein similarity measure called FuSim-II. It is built upon combining the weighted average of biological knowledge extracted from three potential genomic/proteomic resources: Gene Ontology (GO), PPIN, and protein sequence. Furthermore, we show the application of the proposed measure in detecting potential hub proteins from a given PPIN. To that end, we propose a multi-objective clustering-based protein hub detection framework with FuSim-II working as the underlying proximity measure. The PPINs of the H. sapiens and M. musculus organisms are chosen for experimental purposes. Unlike most existing hub-detection methods, the proposed technique does not require a protein degree cut-off or threshold to define hubs. A thorough assessment of efficiency between the proposed and eight existing protein similarity measures, along with eight single/multi-objective clustering methods, has been carried out. Internal cluster validity indices such as Silhouette and Davies-Bouldin (DB) are deployed for the analytical study. Also, a comparative performance analysis between the proposed and five existing hub-protein detection algorithms is conducted through an essentiality enrichment study. The reported results show the improved performance of FuSim-II over existing protein similarity measures in terms of identifying functionally related proteins as well as relevant hub proteins. Supplementary material is available at http://csse.szu.edu.cn/staff/cuilz/eng/index.html.
46,380
Title: Genetic Programming for Instance Transfer Learning in Symbolic Regression Abstract: Transfer learning has attracted more attention in the machine-learning community recently. It aims to improve the learning performance on the domain of interest with the help of the knowledge acquired from a similar domain(s). However, there is only a limited number of research on tackling transfer learning in genetic programming for symbolic regression. This article attempts to fill this gap by p...
48,222
Title: Stabilization and Data-Rate Condition for Stability of Networked Control Systems With Denial-of-Service Attacks Abstract: This article investigates the stabilization control and stabilizing data-rate condition problems for networked control systems, which transmit signals from the sensor to the controller over the communication network with denial-of-service (DoS) attacks. Considering a class of DoS attacks that only constrain its frequency and duration, we aim to explore the constraint condition for stabilization an...
48,228
Title: A new way of crosscutting roles in set oriented programming Abstract: In the past twenty years, dozens of collaboration-based languages have emerged. Often they have an abstraction to denote a collaboration. Most of them use a form of single inheritance to build one collaboration from another. In this model, when a collaboration uses another one, every role class in the sub-collaboration inherits from the role class in the super-collaboration that has the same name. This affects the reusability of roles and collaborations and makes them semi-interactive.
49,368
Title: An improved reliability model for FMEA using probabilistic linguistic term sets and TODIM method Abstract: Failure mode and effects analysis (FMEA) is a proactive reliability analysis model broadly used to recognize and evaluate potential failure modes in various industries. The conventional risk priority number (RPN) method, however, has suffered a lot of criticism, such as requiring precise risk estimation, lacking a scientific basis for computing the RPN, and neglecting the weights of risk factors. Therefore, this paper devises a new FMEA model to evaluate and prioritize the risk of failure modes by integrating probabilistic linguistic term sets and the TODIM (an acronym in Portuguese for interactive multi-criteria decision making) method. The probabilistic linguistic term sets are used to handle the intrinsic ambiguity in the risk assessments of FMEA team members, while an extended TODIM method is employed to determine the priority ranking of the individual failure modes. Further, based on the technique for order of preference by similarity to ideal solution (TOPSIS), an objective weighting method is presented to derive the relative weights of the risk factors. Finally, two illustrative examples are implemented, and comparisons with other existing methods are performed to demonstrate the rationality and superiority of the proposed FMEA model.
49,397
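A crisp simplification of the ranking pipeline above: the paper scores failure modes with probabilistic linguistic term sets and an extended TODIM, but plain TOPSIS on a numeric risk matrix, shown below, illustrates the distance-to-ideal ranking idea. The matrix entries and weights are invented.

```python
import numpy as np

R = np.array([      # rows: failure modes; cols: risk factors O, S, D
    [7, 8, 3],
    [4, 9, 6],
    [6, 5, 5],
], dtype=float)
w = np.array([0.35, 0.40, 0.25])

N = R / np.linalg.norm(R, axis=0)             # vector-normalize each criterion
V = N * w
ideal, worst = V.max(axis=0), V.min(axis=0)   # for risk ranking, higher = "ideal"
d_pos = np.linalg.norm(V - ideal, axis=1)
d_neg = np.linalg.norm(V - worst, axis=1)
closeness = d_neg / (d_pos + d_neg)
ranking = np.argsort(-closeness)              # most risky failure mode first
print(ranking, closeness.round(3))
```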
Title: Modeling and optimization of biomass quality variability for decision support systems in biomass supply chains Abstract: A feasible alternative to the production of fossil fuels is the production of biofuels. In order to minimize the costs of producing biofuels, we developed a stochastic programming formulation that optimizes the inbound delivery of biomass. The proposed model captures the variability in the moisture and ash content in the biomass, which define its quality and affect the cost of biofuel. We propose a novel hub-and-spoke network to take advantage of the economies of scale in transportation and to minimize the effect of poor quality. The first-stage variables are the potential locations of depots and biorefineries, and the necessary unit trains to transport the biomass. The second-stage variables are the flow of biomass between the network nodes and the third-party bioethanol supply. A case study from Texas is presented. The numerical results show that the biomass quality changes the selected depot/biorefinery locations and conversion technology in the optimal network design. The cost due to poor biomass quality accounts for approximately 8.31% of the investment and operational cost. Our proposed L-shaped with connectivity constraints approach outperforms the benchmark L-shaped method in terms of solution quality and computational effort by 0.6% and 91.63% on average, respectively.
49,400
Title: NROI based feature learning for automated tumor stage classification of pulmonary lung nodules using deep convolutional neural networks Abstract: Identifying the exact pulmonary nodule boundaries in computed tomography (CT) images is a crucial task for computer-aided detection systems (CADx). Classifying CT images as benign, malignant or non-cancerous is essential for early detection of lung cancers to improve survival rates. In this paper, a methodology for automated tumor stage classification of pulmonary lung nodules is proposed using an end-to-end learning Deep Convolutional Neural Network (DCNN). The images used in the study were acquired from the Lung Image Database Consortium and Image Database Resource Initiative (LIDC-IDRI) public repository comprising 1018 cases. Lung CT images with candidate nodules are segmented into 52 × 52 pixel nodule region of interest (NROI) rectangles based on four radiologists' annotations and markings with ground truth (GT) values. The approach aims at analyzing and extracting self-learned salient features from the NROIs consisting of differently structured nodules. DCNNs are trained with NROI samples, which are further classified according to the tumor patterns as non-cancerous, benign or malignant. Data augmentation and dropout are used to avoid overfitting. The algorithm was compared with state-of-the-art methods and traditional hand-crafted features such as the statistical, texture and morphological characteristics of lung CT images. A consistent improvement in the performance of the DCNN was observed using the nodule-grouped dataset, achieving a classification accuracy of 97.8%, a specificity of 97.2%, a sensitivity of 97.1%, and an area under the receiver operating characteristic curve (AUC) score of 0.9956, with fewer false positives.
49,511
Title: Ensemble Neighborhood Search (ENS) for biclustering of gene expression microarray data and single cell RNA sequencing data Abstract: Background: Ensemble biclustering comprises a class of biclustering algorithms that generates a consensus, better-quality partition(s) as output. This concept has emerged from the fusion of existing biclustering methods hybridized upon selected aspects, and the design of the methodology enriches the existing methods with new properties. Usually, biclustering of gene expression microarray data involves simultaneous clustering of the expression profiles under specific conditions and determines local two-way clustering models. In general, biclustering solutions rely on different parameters such as the number of biclusters, random initialization, etc. Ensemble techniques are proposed to either reduce or eliminate the impact of such parameters on the output biclusters. Methods: In this paper, the authors propose a novel ensemble biclustering approach, "Ensemble Neighborhood Search (ENS)", based on the concept of neighborhood search. Simulation results verify that the proposed approach is more flexible and adaptive in comparison to existing competitive methods on high-dimensional gene expression microarray data as well as on scRNA-seq datasets. Conclusion: The performance of the proposed framework demonstrates its effectiveness against other state-of-the-art schemes. The proposed framework is tested on five different microarray datasets and one single cell RNA sequencing (scRNA-seq) dataset. Experimental results reveal that the proposed architecture prevents unusual data loss and delivers output refined to user standards. This framework also performs effectively on high-sparsity scRNA-seq data, where most algorithms fail because these datasets contain massive numbers of zeros. BicAT analysis of the ENS output validates the ENS method as computationally effective and usable for improving the quality of the biclusters. Finally, the results are statistically significant, as shown in the ANOVA table. Hence the ENS method can be considered a reliable framework and can be preferred over traditional biclustering approaches for analyzing gene expression microarray data and high-sparsity scRNA-seq data. The source code of the ENS algorithm can be accessed at https://github.com/c114002/Research/blob/master/ENS_Code.zip.
49,563
Title: An improved class of robust ratio estimators by using the minimum covariance determinant estimation Abstract: In this article, ratio estimators for the population mean are suggested using robust covariance estimation under the simple random sampling scheme. Zaman and Bulut developed a class of ratio-type estimators for mean estimation by utilizing robust regression coefficients. In this paper, we extend the estimators presented in Zaman and Bulut by using minimum covariance determinant (MCD) robust covariance estimation. The mean square error (MSE) equation for the new estimators is obtained. These theoretical results are supported by a numerical example and a simulation based on a dataset that includes outliers.
49,616
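A hedged sketch of the MCD step: scikit-learn's MinCovDet supplies a robust location and covariance, from which a robust slope b = s_xy/s_xx feeds one representative ratio-type estimator, (ȳ_r + b(X̄ − x̄_r))·(X̄/x̄_r). The estimator form follows the Zaman-Bulut template only loosely, and the data are synthetic.

```python
import numpy as np
from sklearn.covariance import MinCovDet

rng = np.random.default_rng(0)
x = rng.gamma(5, 2, 80)                      # auxiliary variable
y = 3 * x + rng.normal(0, 2, 80)             # study variable
y[:4] += 60                                  # inject a few outliers
Xbar_pop = 10.0                              # known population mean of x (assumed)

mcd = MinCovDet(random_state=0).fit(np.column_stack([x, y]))
(xbar_r, ybar_r), S = mcd.location_, mcd.covariance_
b = S[0, 1] / S[0, 0]                        # robust regression coefficient
t_ratio = (ybar_r + b * (Xbar_pop - xbar_r)) * (Xbar_pop / xbar_r)
print(t_ratio)
```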
Title: Mobile device security defense method and system based on address jump using sliding window technology Abstract: To solve the problem that IP address jumps cannot be synchronized in real time due to unstable network data transmission speeds in Internet of Things systems, this paper introduces a sliding window mechanism into network communication and constructs an Internet of Things system with a network IP address jump function. The system adopts a time synchronization scheme based on the PTP protocol and an IP address hopping strategy based on a sliding window. The system includes three modules: a time synchronization module based on the PTP protocol to enhance security performance, an address hopping module with OpenFlow as the core technology, and an SDN routing module. The system can realize the security defense function of mobile devices on the Internet of Things. Finally, experimental tests found that when the system time error is less than 18 s, the system's packet loss rate is stable and low, and the anti-attack ability is very strong.
49,690
Title: Redesign Of Vaccine Distribution Networks Abstract: In most low- and middle-income countries supported by the World Health Organization's Expanded Program on Immunization, vaccines are distributed through a legacy medical supply chain that is typically not cost-efficient. Vaccines require storage and transport in a temperature-controlled environment; this requires a "cold" distribution chain with capacity constraints on cold storage and cold transport. We propose an approach to redesigning the vaccine distribution chain that includes locating a set of intermediate distribution centers (DCs) and determining the flow paths from the central store (where vaccines are received into a country) through one or more of these to health clinics where vaccination actually occurs. In addition, the transport vehicles to allocate to each flow path, and the cold storage devices to use at each clinic or intermediate DC are determined. The redesigned network does not have to follow the current four-tiered, arborescent structure commonly found in practice, but can use alternative network structures. To redesign this network optimally, we develop a mixed-integer programming (MIP) model that can be used for small-to-medium-sized problems and also present a hybrid heuristic-MIP method to obtain good solutions for larger problems. Numerical results are shown using data reflecting distribution networks in several countries in sub-Saharan Africa.
49,817
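To make the kind of MIP formulation described above concrete, here is a minimal sketch of a facility-location-style model with cold-chain capacity constraints, written with the PuLP library. All data (costs, demands, capacities) and variable names are illustrative assumptions, not the authors' actual model, which additionally covers flow paths, vehicle allocation and multi-tier network structures.

```python
# Minimal facility-location-style MIP in the spirit of the vaccine-network
# redesign described above (hypothetical data, not the authors' model).
# Requires: pip install pulp
import pulp

clinics = ["c1", "c2", "c3"]
candidate_dcs = ["d1", "d2"]
open_cost = {"d1": 100.0, "d2": 80.0}              # fixed cost of opening a DC
ship_cost = {("d1", "c1"): 2.0, ("d1", "c2"): 3.0, ("d1", "c3"): 1.5,
             ("d2", "c1"): 2.5, ("d2", "c2"): 1.0, ("d2", "c3"): 3.5}
demand = {"c1": 30, "c2": 50, "c3": 20}            # doses required per clinic
cold_capacity = {"d1": 70, "d2": 90}               # cold-storage capacity per DC

prob = pulp.LpProblem("vaccine_network", pulp.LpMinimize)
y = pulp.LpVariable.dicts("open", candidate_dcs, cat="Binary")
x = pulp.LpVariable.dicts("flow", (candidate_dcs, clinics), lowBound=0)

# objective: fixed opening costs plus cold-transport costs
prob += (pulp.lpSum(open_cost[d] * y[d] for d in candidate_dcs)
         + pulp.lpSum(ship_cost[d, c] * x[d][c]
                      for d in candidate_dcs for c in clinics))
for c in clinics:                                  # meet each clinic's demand
    prob += pulp.lpSum(x[d][c] for d in candidate_dcs) >= demand[c]
for d in candidate_dcs:                            # respect cold-chain capacity
    prob += pulp.lpSum(x[d][c] for c in clinics) <= cold_capacity[d] * y[d]

prob.solve(pulp.PULP_CBC_CMD(msg=False))
print({d: y[d].value() for d in candidate_dcs})    # which DCs to open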
Title: Anticipated backward stochastic differential equations and their applications to zero-sum stochastic differential games Abstract: In this article, we are concerned with zero-sum stochastic differential game problems whose cost functional depends not only on the present solution values $(y_t, z_t)$ but also on the anticipated values $(y_{t+\delta}, z_{t+\zeta})$. The main tool is anticipated backward stochastic differential equations under weaker conditions. For these equations, we prove the existence and uniqueness theorem and comparison theorems. Then, by using these techniques, we obtain the saddle-point strategy for zero-sum stochastic differential games when the Isaacs condition holds.
49,822
Title: Modeling dependency between industry production and energy market via stochastic copula approach Abstract: In this article, the dependence structure between industrial production and energy markets is modeled via a stochastic copula autoregressive approach. This model, based on a latent process, is nonlinear and has time-varying parameters. The parameters, which follow an AR latent process, are estimated by maximum likelihood with efficient importance sampling, owing to the high-dimensional integration problem. It is found that the dependence structures between industrial production and energy markets evolve over time. In addition, although the relationships between industrial production and the petrol price, as well as the dependency between industrial production and the producer price, are not affected by extraordinary events such as financial crises, the dependence structure between natural gas and industrial production is influenced by extreme events.
49,870
Title: Energy forecasting using multiheaded convolutional neural networks in efficient renewable energy resources equipped with energy storage system Abstract: Renewable energy resources (RERs) motivate electricity users to reduce their energy bills by taking advantage of self-generated green energy. Different studies have already pointed out that, because of the absence of proper technical support and awareness, energy users have not been able to sufficiently benefit from RERs. However, with the advent of smart grids, the potential benefits of RERs and dynamic pricing schemes can be exploited. Nonetheless, the big issue is the accurate prediction of the energy produced by intermittent RERs. In this work, we propose an efficient framework that integrates an energy storage system (ESS) and RERs with smart homes. This framework has shown significant results, which make it helpful and suitable for energy management at the community level. We apply a multiheaded convolutional neural network model for precise and accurate prediction of the energy produced by RERs. Moreover, we consider a smart community consisting of 80 homes. Simulation results prove that the proposed framework helps decrease consumers' energy bills by 58.32% and 63.02% through the integration of RERs without and with the ESS, respectively.
49,895
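A minimal sketch of what a multiheaded convolutional forecaster can look like in Keras, assuming one convolutional head per input series; the layer sizes, number of series and forecast horizon are illustrative guesses, not the paper's configuration.

```python
# Multi-headed 1-D CNN forecaster sketch (illustrative, TensorFlow 2.x).
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers

n_steps, n_series = 24, 3        # 24 past hours; 3 series (e.g. PV, wind, load)
heads, inputs = [], []
for i in range(n_series):        # one convolutional "head" per input series
    inp = layers.Input(shape=(n_steps, 1), name=f"series_{i}")
    h = layers.Conv1D(16, kernel_size=3, activation="relu")(inp)
    h = layers.MaxPooling1D(pool_size=2)(h)
    h = layers.Flatten()(h)
    inputs.append(inp)
    heads.append(h)

merged = layers.concatenate(heads)                    # fuse per-series features
out = layers.Dense(32, activation="relu")(merged)
out = layers.Dense(1, name="next_hour_energy")(out)   # one-step-ahead forecast
model = tf.keras.Model(inputs=inputs, outputs=out)
model.compile(optimizer="adam", loss="mse")

# toy data: 128 training windows per head
X = [np.random.rand(128, n_steps, 1).astype("float32") for _ in range(n_series)]
y = np.random.rand(128, 1).astype("float32")
model.fit(X, y, epochs=2, batch_size=16, verbose=0)
```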
Title: Effective deep learning approaches for summarization of legal texts Abstract: The availability of legal judgment documents in digital form offers numerous opportunities for information extraction and application. Automatic summarization of these legal texts is a crucial and challenging task due to the unusual structure and high complexity of these documents. Previous approaches in this direction have relied on huge labelled datasets, used hand-engineered features, leveraged domain knowledge, and focused on a narrow sub-domain for increased effectiveness. In this paper, we propose simple, generic neural network techniques for the summarization of Indian legal judgment documents. We explore two neural network architectures for this task, utilizing word and sentence embeddings to capture the semantics. The main advantage of the proposed approaches is that they do not rely on hand-crafted features or domain-specific knowledge, nor is their application restricted to a particular sub-domain, making them suitable for extension to other domains as well. We tackle the unavailability of labelled data for the task by assigning classes/scores to sentences in the training set based on their match with reference summaries produced by humans. The experimental evaluations establish the effectiveness of our proposed approaches compared with other baselines.
49,908
Title: Fuzzy modelling based energy aware clustering in wireless sensor networks using modified invasive weed optimization Abstract: Energy conservation in the highly resource-restricted environment of Wireless Sensor Networks (WSNs) is a crucial task. This issue can be efficiently tackled by adopting the mechanism of clustering in sensor networks. The primary objective of any clustering protocol designed for WSNs is to obtain energy efficiency and to lengthen the network's life duration. This is achieved through the election of a single deserving node as a head node, called the cluster head. This paper suggests an evolutionary clustering protocol named Modified Invasive Weed Optimization Based Clustering Algorithm (M-IWOCA) for WSNs. M-IWOCA focuses on selecting the fittest node as cluster head to enhance the network's lifetime and to address the issue of energy conservation in sensor networks. A fuzzy inference model is designed to evaluate the fitness of each node in the network. Simulation results for M-IWOCA depict a significant reduction in dead nodes per round and a minimization of the sensors' energy utilization. Moreover, M-IWOCA enhances the network stability period by 45% in comparison to the Artificial Bee Colony (ABC) protocol and by 18% in comparison to the Quantum Artificial Bee Colony (QABC) protocol.
49,919
Title: Enhancing science self-efficacy and attitudes of Pre-Service Teachers (PST) through a flipped classroom learning environment Abstract: The flipped instruction methodology has increased in popularity in recent years. One fruitful application of the flipped methodology is increasing students' self-efficacy and attitudes as learners in their science courses. This study investigates the effects of a flipped classroom teaching methodology on Pre-Service Teachers' (PSTs') self-efficacy in science content and in teaching science, as well as their attitudes toward science. Various instruments were used to assess the influence of the methodology on the aforementioned variables, and the results indicated significant differences in the students' self-efficacy before and after course completion. Additionally, the methodology significantly increased positive attitudes toward science and scientific content, and, therefore, PSTs were more willing to enjoy science. Thus, a flipped science course can contribute to producing science self-efficacious PSTs with the positive attitudes that are vital to accomplishing insights and visions for the professional and specialized development of PSTs.
49,921
Title: Towards a Cashless Society: The Imminent Role of Wearable Technology Abstract: Wearable payment, the use of wearable technology to make payments, is anticipated to be the future of proximity mobile payment. However, the acceptance and use of wearable payment in Malaysia leave much to be desired, and this area of research is currently under-addressed. Thus, this study looks into the elements that influence the intention to adopt wearable payment in Malaysia. The results offer robust insights into the intention to adopt wearable payment in Malaysia, with numerous implications for wearable technology companies, wearable payment system designers, merchants and other stakeholders. Overall, this research is anticipated to serve as pioneering work for future studies in this area.
50,025
Title: A novel collaborative requirement prioritization approach to handle priority vagueness and inter-relationships Abstract: One of the challenging tasks of requirement engineering is to choose which among a set of requirements are more advantageous than others and ought to be considered first for execution. Years of research have established that it is vital to capture, analyze and prioritize requirements, which to a great extent relies on the acknowledgment and acceptance of stakeholders' requirements, concerns and criteria. The proposed method provides interactive support for eliciting stakeholders' and developers' initial ranking decisions in a collaborative manner. An Intuitionistic Fuzzy Set (IFS) approach is used to support the stakeholders' view, whereas analysis of inter-relationships, requirement slicing and backtracking support the developers' view using a weighted PageRank algorithm. Finally, a ranking is generated in a collaborative manner to support requirement prioritization. The feasibility of the approach is illustrated through an experimental proof of concept, comparing the results of the proposed approach with the state-of-the-art Analytic Hierarchy Process (AHP) and Interactive Genetic Algorithm (IGA) prioritization techniques. The results of the experimentation show that the presented approach is capable of producing accurate and comparable results by handling the technical constraints of dependency, the tradeoffs of collaboration, and the scope of inclusion of initial multi-criteria preferences expressed as priority values, supporting reliability and robustness to errors.
50,066
Title: Further Results on Packing Related Parameters in Graphs Abstract: Given a graph $G = (V, E)$, a set $B \subseteq V(G)$ is a packing in $G$ if the closed neighborhoods of every pair of distinct vertices in $B$ are pairwise disjoint. The packing number $\rho(G)$ of $G$ is the maximum cardinality of a packing in $G$. Similarly, open packing sets and the open packing number are defined for a graph $G$ by using open neighborhoods instead of closed ones. We give several results concerning the (open) packing number of graphs in this paper. For instance, several bounds on these packing parameters along with some Nordhaus-Gaddum inequalities are given. We characterize all graphs with equal packing and independence numbers and give the characterization of all graphs for which the packing number is equal to the independence number minus one. In addition, due to the close connection between the open packing and total domination numbers, we prove a new upper bound on the total domination number $\gamma_t(T)$ for a tree $T$ of order $n \ge 2$, improving the upper bound $\gamma_t(T) \le (n + s)/2$ given by Chellali and Haynes in 2004, in which $s$ is the number of support vertices of $T$.
50,174
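To make the packing definition above concrete, the following brute-force sketch computes $\rho(G)$ directly from the definition; it is suitable for small graphs only (the paper's results are theoretical, not algorithmic).

```python
# Brute-force computation of the packing number rho(G) from its definition
# (closed neighborhoods pairwise disjoint); exponential time, small graphs only.
from itertools import combinations

def packing_number(adj):
    """adj: dict mapping each vertex to the set of its neighbors."""
    vertices = list(adj)
    closed = {v: adj[v] | {v} for v in vertices}    # closed neighborhoods N[v]
    for k in range(len(vertices), 0, -1):           # try largest sets first
        for cand in combinations(vertices, k):
            # a packing: N[u] and N[v] disjoint for every pair u != v
            if all(closed[u].isdisjoint(closed[v])
                   for u, v in combinations(cand, 2)):
                return k
    return 0

# path on 5 vertices: rho(P5) = 2 (e.g. {0, 3})
p5 = {0: {1}, 1: {0, 2}, 2: {1, 3}, 3: {2, 4}, 4: {3}}
print(packing_number(p5))   # -> 2
```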
Title: Two-stage sampling plan using process loss index under neutrosophic statistics Abstract: This article develops the design of a two-stage sampling plan based on the process loss index using neutrosophic statistics (NS). The neutrosophic operating characteristic for two-stage sampling is developed under NS. Neutrosophic plan parameters are obtained via non-linear optimization, with the constraints handled by the neutrosophic statistical interval method. A comparative study is carried out with the existing scheme. Some tables are provided and explained with an example under uncertainty.
50,212
Title: A PBNM and economic incentive-based defensive mechanism against DDoS attacks Abstract: In this paper, a policy-based network management (PBNM) strategy is proposed to defend against DDoS attacks while maintaining Quality of Service (QoS) for legitimate users. During the cautious or alert levels, only users having a contract are allowed to enter the network. Moreover, PBNM is used to enable users to negotiate with the service provider dynamically on the cost and types of services. Extensive experimentation has been carried out in two phases to check the validity of the proposed model. In the first phase, the reachability condition is checked on the PN2 simulator and SPIN, while in the second phase, NS-2 is used to monitor the performance of the proposed model according to networking parameters. The implementation results show the supremacy of the proposed model.
50,254
Title: Bootstrap inference for weighted nearest neighbors imputation Abstract: An attractive approach to filling in substitute values for missing data is imputation. A number of methods are available in the literature for the imputation of missing data. However, it is not advisable to treat the imputed data just as complete data: applying the existing methods to analyze such data, for example to estimate the variance and/or draw statistical inference, will probably produce invalid results because these methods do not account for the uncertainty of the imputations. In this article, we present analytic techniques for inference from a dataset in which missing values have been replaced by the nearest neighbors imputation method. A simple and easy-to-use bootstrap algorithm that combines nearest neighbors imputation with bootstrap resampling estimation is suggested to obtain valid bootstrap inferences in a linear regression model. More specifically, imputing bootstrap samples in exactly the same way as the original data was imputed produces correct bootstrap estimates. Simulation results show the performance of our approach for different data structures.
50,320
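A minimal sketch of the bootstrap-after-imputation idea described above: each bootstrap resample is re-imputed from scratch before refitting, so the imputation uncertainty propagates into the bootstrap distribution. The imputer, the number of neighbors, and the regression model are illustrative; the paper's weighted nearest neighbors scheme may differ from scikit-learn's KNNImputer.

```python
# Bootstrap-after-imputation sketch: re-impute inside every bootstrap loop.
import numpy as np
from sklearn.impute import KNNImputer
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n = 200
X = rng.normal(size=(n, 2))
y = 1.0 + 2.0 * X[:, 0] - 0.5 * X[:, 1] + rng.normal(scale=0.5, size=n)
X[rng.random(n) < 0.15, 0] = np.nan          # ~15% missing in the first column

def fit_slope(Xm, ym):
    """Impute with nearest neighbors, then fit OLS; return the first slope."""
    Xi = KNNImputer(n_neighbors=5).fit_transform(Xm)
    return LinearRegression().fit(Xi, ym).coef_[0]

theta_hat = fit_slope(X, y)
boot = []
for _ in range(500):
    idx = rng.integers(0, n, size=n)         # resample rows with replacement
    boot.append(fit_slope(X[idx], y[idx]))   # re-impute *inside* the loop

se = np.std(boot, ddof=1)                    # imputation-aware bootstrap SE
print(f"slope = {theta_hat:.3f}, bootstrap SE = {se:.3f}")
```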
Title: Bayesian predictive analysis for Weibull-Pareto composite model with an application to insurance data Abstract: Aminzadeh and Deng respectively provide Bayesian predictive models for the Exponential-Pareto and Inverse Gamma-Pareto composite distributions, which are one-parameter models. The purpose of this article is to develop an alternative (two-parameter) Bayesian predictive model that can be used to compute important risk measures not defined via the above predictive models. A Bayesian predictive density for the Weibull-Pareto composite distribution is developed and used to compute risk measures such as Value at Risk (VaR), Conditional Tail Expectation (CTE), Predictive Expectation (PE), Limited Predictive Expected value (LPE), Limited Predictive Variance (LPV), and Limited Predictive Tail-VaR (LPCTE). The accuracy of the parameter estimates as well as the risk measures is assessed via simulation studies. It is shown that the informative Bayes estimates are consistently more accurate than the ML and non-informative Bayes estimates. Backtesting for the risk measures is performed, and the goodness-of-fit of the Weibull-Pareto model, among other composite models, to the Danish fire data is assessed.
50,367
Title: Testing the absence of random effects in the nested-error regression model using orthogonal transformations Abstract: Testing the absence of random effects in the nested error regression model is nonstandard because it requires the null value of the random effects variance to be on the boundary of its space. In this paper, we propose a test statistic that can be obtained by applying two successive orthogonal transformations to the response vector such that the components of the resulting new vector are uncorrelated variates having zero mean and a common variance. Consequently, two new tests are proposed. The first assumes the normality of the responses and is based on approximating the distribution of the test statistic as a weighted sum of chi-square variates. The second test is a distribution-free permutation test based on the same test statistic. Our simulation experiments indicate that both tests have empirical significance levels that are close to the chosen nominal level. The statistical power of the first test is studied analytically. Empirical power comparisons indicate that the first test outperforms recently developed tests. The second test also has a competitive performance among them. Using a real dataset, the use of the proposed tests is illustrated.
50,377
Title: The efficiency of run rules schemes for the multivariate coefficient of variation in short runs process Abstract: In real industries, multivariate process monitoring is crucial, as there are many instances that involve at least two quality variables to be monitored simultaneously. The short runs process is commonly seen in production after industry moved toward flexible manufacturing. Monitoring the coefficient of variation (CV) is useful in a wide variety of scientific areas. In view of the importance of monitoring the CV and the fact that most real-life data in process monitoring are multivariate in nature, this paper proposes to monitor the multivariate CV in the short runs process by means of run rules (RR) control charts. A Markov chain model is established for designing the proposed charts. The statistical performances of the RR multivariate CV (MCV) and Shewhart MCV (SH MCV) charts are compared in terms of the truncated average run length and the expected truncated average run length. The results show that the proposed charts surpass the SH MCV chart in detecting small and moderate multivariate CV shifts. The implementation of the RR MCV chart in the short runs process is illustrated with an example using a real dataset.
50,407
Title: Machine learning based on extended generalized linear model applied in mixture experiments Abstract: When performing mixture experiments, we observe that maximum likelihood methods present problems related to collinearity, small sample sizes, and over/under-dispersion. In order to overcome these problems, this investigation proposes a model built in accordance with a machine learning approach, called Boosted Simplex Regression, which is evaluated in terms of both accuracy and precision of the odds ratio. The advantages of this new approach are illustrated in a mixture experiment, which leads us to conclude that the Boosted Simplex Regression model provides not only better fit quality but also more precise odds ratio confidence intervals.
50,413
Title: Skin lesion segmentation and classification: A unified framework of deep neural network features fusion and selection Abstract: Automated skin lesion diagnosis from dermoscopic images is a difficult process due to several notable problems such as artefacts (hairs), irregularity, lesion shape, and irrelevant feature extraction. These problems make the segmentation and classification process difficult. In this research, we propose optimized colour features (OCFs) for lesion segmentation and deep convolutional neural network (DCNN)-based skin lesion classification. A hybrid technique is proposed to remove the artefacts and improve the lesion contrast. Then, a colour segmentation technique based on the OCFs is presented. The OCF approach is further improved by an existing saliency approach, fused by a novel pixel-based method. A DCNN-9 model is implemented to extract deep features, which are fused with the OCFs by a novel parallel fusion approach. After this, a normal distribution-based high-ranking feature selection technique is utilized to select the most robust features for classification. The suggested method is evaluated on the ISBI series (2016, 2017, and 2018) datasets. The experiments are performed in two steps and achieve an average segmentation accuracy of more than 90% on the selected datasets. Moreover, the achieved classification accuracies of 92.1%, 96.5%, and 85.1%, respectively, on the three datasets show that the presented method has remarkable performance.
50,529
Title: Analysis of a two echelon supply chain with merging suppliers, a storage area and a distribution center with parallel channels Abstract: This paper examines a discrete-material two-echelon supply chain system. Multiple reliable, non-identical merging suppliers send products to an intermediate storage area, which can in turn feed a distribution center with parallel identical reliable distribution channels. It is assumed that each merging supplier may have parallel identical reliable supply channels. The service rate of each supplier and each identical channel at the distribution center is assumed to be exponentially distributed. The examined model is analyzed as a continuous-time Markov process with discrete states. An algorithm that can create the system's transition probability matrix for any value of its parameters is presented, and various performance measures are calculated. The comparison of the proposed method with simulation showed that the proposed algorithm provides very accurate estimations of the system's performance measures. Additionally, the optimal values of the system's parameters for optimizing its various performance measures are explored thoroughly.
50,595
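A minimal sketch of the kind of computation behind such models: build the transition-rate matrix Q of a continuous-time Markov chain and solve pi Q = 0 with sum(pi) = 1 for the stationary distribution. The tiny single-buffer chain used here (fed at rate lam, drained at rate mu) is illustrative, not the paper's multi-supplier system.

```python
# Stationary distribution of a small continuous-time Markov chain.
import numpy as np

lam, mu, capacity = 2.0, 3.0, 4          # arrival rate, service rate, buffer size
states = capacity + 1                    # buffer levels 0..capacity
Q = np.zeros((states, states))
for i in range(states):
    if i < capacity:
        Q[i, i + 1] = lam                # a supplier delivers one unit
    if i > 0:
        Q[i, i - 1] = mu                 # a distribution channel takes one unit
    Q[i, i] = -Q[i].sum()                # rows of a rate matrix sum to zero

# solve pi Q = 0 together with the normalization sum(pi) = 1
A = np.vstack([Q.T, np.ones(states)])
b = np.zeros(states + 1)
b[-1] = 1.0
pi, *_ = np.linalg.lstsq(A, b, rcond=None)
print(pi.round(4))                       # P(buffer level = 0..capacity)
```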
Title: Distinguishing luck from skill through statistical simulation: a case study Abstract: To investigate the perennial question of how to measure luck versus skill, we perform a detailed simulation study of Texas Hold'em poker hands. We define luck and skill as the player equity changes from dealing cards and from player betting decisions, respectively. We find that a careful definition of player equity leads to measurements of luck and skill which satisfy the statistical properties that we should expect of them. We conclude that our definitions of luck versus skill appear to be valid and accurate in the context of poker hands, and perhaps beyond, for issues of luck and skill in other aspects of modern society.
50,601
Title: Test shape constraints in semiparametric model with Bernstein polynomials Abstract: In this article, we mainly discuss hypothesis tests to assess whether the nonparametric function satisfies monotonicity or convexity in a semiparametric partially linear model. For this purpose, we propose a new test statistic for the monotonicity or convexity of the nonparametric function based on Bernstein polynomials. Furthermore, we employ a likelihood ratio statistic to test the significance of the regression parameters of the model. We discuss the asymptotic properties of the tests for both the nonparametric function and the regression parameters. A simulation study is conducted to evaluate the finite sample performance compared with other methods for shape tests. The method is illustrated by a fuel efficiency study.
50,609
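A standard identity (not the paper's exact test statistic) shows why Bernstein polynomials make shape constraints tractable: the derivative of a Bernstein polynomial is again a Bernstein polynomial in the coefficient differences, so linear constraints on the coefficients control the shape.

```latex
% For the Bernstein polynomial
%   B_m(x) = \sum_{k=0}^{m} \beta_k \binom{m}{k} x^k (1-x)^{m-k},
% the derivative has Bernstein form in the first-order differences:
B_m'(x) = m \sum_{k=0}^{m-1} \left(\beta_{k+1} - \beta_k\right)
          \binom{m-1}{k} x^k (1-x)^{m-1-k},
\qquad
\beta_0 \le \beta_1 \le \cdots \le \beta_m \;\Longrightarrow\; B_m \text{ nondecreasing}.
```

Convexity is handled analogously through the second-order differences $\beta_{k+2} - 2\beta_{k+1} + \beta_k$.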
Title: Network Forensic Investigation Protocol to Identify True Origin of Cyber Crime Abstract: An increase in digitization is giving rise to cybercrime. The existing network protocols are insufficient for collecting the digital evidence required in cybercrime cases, which eventually makes the process of forensic investigation difficult. In the current scenario of network forensics, an investigator's capabilities reach only up to the ISP, which is not primary evidence, and the available tools work only at the network layer. In this work, we propose a protocol that ensures tracking up to the true source by collecting forensically sound evidence beforehand. The proposed protocol can collect target data from the device in the form of a device fingerprint with the help of an agent process. The proposed methodology will help in proving non-repudiation, which is a well-known challenge in forensic cases. The fingerprint evidence generated by the proposed method does not become obsolete even if the criminal tries to destroy evidence. The fingerprinting technique deployed uses a hash tree and generates evidence in such a way that the fingerprint can act as legal evidence. The security validation of the proposed system is done using BAN logic. Formal verification is performed using the AVISPA tool. The system has been implemented as a prototype and hosted on AWS.
50,679
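A minimal sketch of a hash-tree (Merkle-tree) device fingerprint of the kind the protocol relies on: leaves are hashes of evidence items, and the root changes if any item is later altered or destroyed. The evidence fields below are illustrative placeholders, not the protocol's actual attribute set.

```python
# Merkle-tree fingerprint sketch using SHA-256.
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(leaves: list) -> bytes:
    """Pairwise-hash each level upward until a single root remains."""
    level = [h(item) for item in leaves]
    while len(level) > 1:
        if len(level) % 2 == 1:            # duplicate the last node if odd
            level.append(level[-1])
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

evidence = [b"mac=aa:bb:cc:dd:ee:ff", b"os=android-12",
            b"imei=...", b"session=2024-01-01T10:00Z"]
root = merkle_root(evidence)
print(root.hex())   # compact fingerprint; any tampering changes the root
```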
Title: b-Coloring of the Mycielskian of Some Classes of Graphs Abstract: The b-chromatic number b(G) of a graph G is the maximum k for which G has a proper vertex coloring using k colors such that each color class contains at least one vertex adjacent to a vertex of every other color class. In this paper, we mainly investigate the b-chromatic number of the Mycielskian of regular graphs. In particular, we obtain the exact value of the b-chromatic number of the Mycielskian of some classes of graphs, including a few families of regular graphs, graphs with b(G) = 2, and split graphs. In addition, we find bounds for the b-chromatic number of the Mycielskian of some more families of regular graphs in terms of the b-chromatic number of their original graphs. We also find the b-chromatic number of the generalized Mycielskian of some regular graphs.
50,699
Title: Analysis and research on intelligent manufacturing medical product design and intelligent hospital system dynamics based on machine learning under big data Abstract: In order to promote the application of machine learning in intelligent manufacturing, especially for medical products and intelligent hospitals, intelligent medical disease diagnosis and classification products are designed based on theoretical research into machine learning under big data. Two algorithms, MLP and SVM, are compared in terms of accuracy, recall and F1, and it is concluded that the medical diagnosis classification performance of both methods remains around 90%. At the same time, the research shows that the corresponding accuracy improves only when fuzzy matching is applied to a larger share of positive examples.
50,714
Title: An Improved Selection Method Based On Crowded Comparison For Multi-Objective Optimization Problems In Intelligent Computing Abstract: The main method for dealing with multi-objective optimization problems (MOPs) is improvement of the non-dominated sorting genetic algorithm II (NSGA-II), which has achieved great success in solving MOPs. It mainly uses a crowded comparison method (CCM) to select suitable individuals to enter the next generation. However, the CCM requires calculating the crowding distance of each individual, which means sorting the population according to each objective function and incurs a heavy computational burden. To better deal with this problem, we propose an improved crowded comparison method (ICCM), which combines the CCM with a random selection method (RSM) based on the number of selected individuals. The RSM is an operator that randomly selects suitable individuals for the next generation according to the number of needed individuals, which can reduce the computational burden significantly. The performance of ICCM is tested on two different benchmark sets (the ZDT test set and the UF test set). The results show that ICCM can reduce the computational burden by switching between the two selection methods (i.e., CCM and RSM).
50,763
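A minimal sketch of the switching idea in ICCM, assuming a simple rule for when random selection replaces the crowding-distance computation; the paper's exact criterion, based on the number of selected individuals, may differ.

```python
# ICCM-style selection sketch: cheap random selection (RSM) when many
# individuals are needed, full crowding distance (CCM) otherwise.
import random

def crowding_distance(objs):
    """Standard NSGA-II crowding distance; objs is a list of objective tuples."""
    n, m = len(objs), len(objs[0])
    dist = [0.0] * n
    for k in range(m):                                   # one sort per objective
        order = sorted(range(n), key=lambda i: objs[i][k])
        lo, hi = objs[order[0]][k], objs[order[-1]][k]
        dist[order[0]] = dist[order[-1]] = float("inf")  # keep boundary points
        if hi == lo:
            continue
        for j in range(1, n - 1):
            dist[order[j]] += (objs[order[j + 1]][k]
                               - objs[order[j - 1]][k]) / (hi - lo)
    return dist

def select(front, objs, n_needed):
    if n_needed >= len(front) // 2:          # many needed: random is cheap (RSM)
        return random.sample(front, n_needed)
    d = crowding_distance(objs)              # few needed: pay for CCM
    return sorted(front, key=lambda i: d[i], reverse=True)[:n_needed]

objs = [(1.0, 5.0), (2.0, 3.0), (3.0, 2.5), (4.0, 1.0)]
print(select(list(range(4)), objs, 2))
```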
Title: The workshop scheduling problems based on data mining and particle swarm optimisation algorithm in machine learning areas Abstract: The optimisation process and results are classified and stored to guide future workshop scheduling and improve retrieval efficiency. The results show that a random inertia weight strategy is added to the standard particle swarm optimisation (PSO) algorithm, and the crossover and mutation ideas of the genetic algorithm (GA) are introduced to increase the diversity of the population and prevent it from falling into a local optimal solution. Finally, the global optimal solution can be searched for by using the genetic algorithm's strong ability to jump out of local optima, ensuring that population evolution does not stagnate.
50,774
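A minimal sketch of the hybrid described above: PSO with a random inertia weight plus a GA-style mutation for diversity. The constants and the sphere test function are illustrative, not the paper's scheduling objective.

```python
# PSO with random inertia weight and GA-style mutation (illustrative).
import numpy as np

rng = np.random.default_rng(1)

def f(x):
    return np.sum(x * x, axis=-1)            # minimise the sphere function

n, dim, iters = 30, 5, 200
pos = rng.uniform(-5, 5, (n, dim))
vel = np.zeros((n, dim))
pbest, pbest_val = pos.copy(), f(pos)
gbest = pbest[np.argmin(pbest_val)].copy()

for _ in range(iters):
    w = 0.5 + rng.random() / 2.0              # random inertia weight in [0.5, 1)
    r1, r2 = rng.random((n, dim)), rng.random((n, dim))
    vel = w * vel + 2.0 * r1 * (pbest - pos) + 2.0 * r2 * (gbest - pos)
    pos = pos + vel
    mutate = rng.random((n, dim)) < 0.01      # GA-style mutation: random resets
    pos[mutate] = rng.uniform(-5, 5, mutate.sum())
    val = f(pos)
    better = val < pbest_val                  # update personal and global bests
    pbest[better], pbest_val[better] = pos[better], val[better]
    gbest = pbest[np.argmin(pbest_val)].copy()

print(f(gbest))                               # close to 0 after a few hundred iters
```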
Title: Diverse classifiers ensemble based on GMDH-type neural network algorithm for binary classification Abstract: The Group Method of Data Handling (GMDH)-type neural network algorithm is a heuristic self-organizing algorithm for modeling sophisticated systems. In this study, we propose a new algorithm assembling different classifiers based on the GMDH algorithm for binary classification. A Monte Carlo simulation study is conducted to compare the diverse classifier ensemble based on GMDH (dce-GMDH) algorithm to other well-known classifiers and to give applied researchers recommendations on the selection of an appropriate classifier under different conditions. The simulation study illustrates that the proposed approach is more successful in classification than the other classifiers in most scenarios generated under the different conditions. Our proposed method is compared to the other classifiers on the Cleveland heart disease data, and an implementation of the proposed approach is demonstrated on urine data. Moreover, the proposed algorithm is released in the R package GMDH2 under the name "dceGMDH".
50,777
Title: Estimation and prediction based on type-I hybrid censored data from the Poisson-Exponential distribution Abstract: This paper considers the problems of estimation and prediction when lifetime data following the Poisson-exponential distribution are observed under type-I hybrid censoring. For both problems, we compute point and associated interval estimates under classical and Bayesian approaches. For point estimates in the estimation problem, we compute maximum likelihood estimates using the Newton-Raphson, Expectation-Maximization and Stochastic Expectation-Maximization algorithms under the classical approach; under the Bayesian approach, we compute Bayes estimates with the help of the Lindley and importance sampling techniques under informative and non-informative priors using symmetric and asymmetric loss functions. The associated interval estimates are obtained using the Fisher information matrix and the Chen and Shao method under the classical and Bayesian approaches, respectively. Further, in the prediction problem, predictive point estimates and associated predictive interval estimates are computed by making use of the best unbiased and conditional median predictors under the classical approach, and Bayesian predictive and associated Bayesian predictive interval estimates under the Bayesian approach. We analyze a real data set and conduct a Monte Carlo simulation study to compare the various proposed methods of estimation and prediction. Finally, a conclusion is given.
50,886
Title: The effect of aggregating multivariate count data using Poisson profiles Abstract: The effect of aggregating data has been studied in several situations, including some related to health surveillance, in which univariate count data are frequently observed. However, this effect has not been studied in more complex situations, such as the annual aggregation of cases of several diseases discriminated by the age of patients. In this article, we study the effect of aggregation of multivariate count data that depend on a single covariate, in the context of Statistical Quality Control, using a methodology based on Poisson profile monitoring. More specifically, the relative performance of two monitoring schemes is evaluated in terms of their average time to signal (ATS) when aggregated and disaggregated information is used. We also study whether or not the aggregation effect is the same for the two schemes.
50,915
Title: Nonparametric prior elicitation for a binomial proportion Abstract: This paper proposes a nonparametric Bayesian approach to density estimation on the open unit interval (0,1) using binomial data. We propose a very efficient nonparametric Bayesian method to infer a smooth density defined on (0,1) through the transformation of a random variable. For practical implementation, we provide the corresponding blocked Gibbs sampling procedure based on the stick-breaking representation. The greatest advantage of this method is that it does not require drawing from the complete conditional posterior distribution using a Metropolis-Hastings transition probability, because the proposed transformation leads to a pair of conjugate priors and likelihoods. The validity of the proposed method is assessed through simulated and real data analysis.
50,923
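A minimal sketch of the truncated stick-breaking construction underlying the blocked Gibbs sampler mentioned above: weights w_k = v_k * prod_{j<k}(1 - v_j) with v_k ~ Beta(1, alpha). The truncation level and concentration parameter are illustrative.

```python
# Truncated stick-breaking weights for a blocked Gibbs sampler.
import numpy as np

rng = np.random.default_rng(42)

def stick_breaking(alpha: float, truncation: int) -> np.ndarray:
    v = rng.beta(1.0, alpha, size=truncation)
    v[-1] = 1.0                              # close the stick at the truncation
    remaining = np.concatenate([[1.0], np.cumprod(1.0 - v[:-1])])
    return v * remaining                     # weights sum to 1 exactly

w = stick_breaking(alpha=2.0, truncation=25)
print(w.round(3), w.sum())                   # mixture weights, sum == 1
```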
Title: An efficient certificateless public key encryption with equality test toward Internet of Vehicles Abstract: With the rapid development of the Internet of Vehicles, a great number of vehicles are connected to the Internet of Vehicles environment. These vehicles can exchange data with each other and upload the data to a cloud server for backup. Since the data contains sensitive information, it is necessary to encrypt the data before uploading it to the cloud server. However, this severely hampers the utilization of the data because of the tremendous difficulty of searching over encrypted data. To address this issue, we propose a novel certificateless public key encryption scheme supporting equality tests on ciphertexts, which are performed by the cloud server while revealing nothing about the corresponding plaintexts. Our scheme is proved secure under the bilinear Diffie-Hellman assumption in the random oracle model. At the same time, both theoretical analysis and experimental evaluation indicate the security, efficiency, and effectiveness of the proposed scheme.
50,991
Title: The effect of cognitive and behavioral factors on student success in a bottleneck business statistics course via deeper analytics Abstract: In this article, we study a set of factors underlying student success in a bottleneck business course using statistical and data mining techniques. The factors included learning styles, motivational and other cognitive factors, personality traits, and learning analytics, along with background demographic and academic ones. Our analysis yielded interesting insights showing that some of these factors play significant roles in predicting both student performance and students' propensity to utilize resources that help improve their performance, such as additional support services. The predictive accuracy of both of our models was over 95% (error rate <5%). Moreover, quantile regression models were used to determine factors that specifically affect the performance of low-performing students so that targeted intervention and support services can be developed specifically for them. In conclusion, deeper analytics via statistical models are crucial for forming an in-depth understanding of how to improve student performance in a bottleneck course, and this has far-reaching implications for both educators and administrators in higher education.
51,035
Title: C-EEUC: a Cluster Routing Protocol for Coal Mine Wireless Sensor Network Based on Fog Computing and 5G. Abstract: In an underground coal mine, a routing protocol for Wireless Sensor Networks (WSNs) based on fog computing can effectively combine the monitoring task with the computing task and provide the correct data forwarding path to meet the requirements of the aggregation and transmission of sensed information. However, energy efficiency must still be taken into account, especially the imbalance of energy consumption. 5G is a technical system mixing high and low frequencies, with the characteristics of large capacity, low energy consumption and low cost; with the formal freeze of the 5G NSA standards, 5G networks are one step closer to our lives. In this paper, a centralized non-uniform clustering routing protocol, C-EEUC, based on residual energy and communication cost is proposed. The C-EEUC protocol considers all nodes as candidate cluster heads in the clustering stage and defines a weight matrix P, whose element values take into account the residual energy of nodes and the cost of communication between nodes and cluster heads, and which serves as the basis for cluster-head selection. When selecting a cluster head, the node with the largest weight is repeatedly chosen from the set of candidate cluster heads, the other candidate cluster heads within its competition range withdraw from the competition, and the candidate cluster-head set is then updated. Experimental results show that the optimized protocol can effectively extend the network life cycle.
51,092
Title: How system functionality improves the effectiveness of collaborative learning Abstract: The purpose of this study was to build on the existing research into the relationships between collaboration and germane cognitive load in online learning environments. Where this study differs is the examination into the importance of the system functionality of online environments and its relationship to germane cognitive load and collaboration. The research was performed using surveys of students who were studying at the Open Cyber University of Korea. The study found that the relationship between collaborative interaction and germane load was stronger when system functionality is high. It also supported existing evidence that well-designed online systems with high functionality benefit learners, as students who are in classes with higher quality systems report higher levels of germane cognitive load. This leads to the importance of improving both pedagogy and technology in online learning environments.
51,160
Title: New concepts of principal component analysis based on maximum separation of clusters Abstract: It is common to apply dimension reduction techniques like principal component analysis before performing cluster analysis of multivariate data. However, it is not guaranteed that the most useful information for separating different groups is concentrated in the first few principal components. To improve the performance of clustering, a novel nonparametric method is constructed by redefining the principal component as the linear combination of attributes that maximizes a newly proposed measure of separation of clusters. An efficient dynamic programming algorithm is described, whose complexity depends on n, the number of observations, and p, the number of attributes. The applications of the proposed methods are discussed with examples in credit card issuance and privacy protection under randomized multiple response techniques.
51,185
Title: Simulation and application of subsampling for threshold autoregressive moving-average models Abstract: This article considers subsampling inference for threshold ARMA (TARMA) models. Of main interest is inference for the threshold parameter, for which it is known that the limiting distribution of the corresponding estimator is non-normal and very complicated. A simple approach based on a subsampling method is proposed to construct asymptotically valid confidence intervals for the threshold parameter. It is demonstrated that our method furnishes a valid inference for the threshold parameter. The performance of this method in finite samples is evaluated through simulation studies.
51,217
Title: Improved payload capacity in LSB image steganography uses dilated hybrid edge detection Abstract: This research proposes dilated hybrid edge detection on the three most significant bits (MSB) of cover image pixels, with the aim of expanding the edge area so as to increase the data embedding capacity in image steganography. The technique can perform extraction without needing to store the edge map of the original cover image. This is because edge detection is performed on the 3-bit MSB image, whereas messages are embedded in x LSBs and y LSBs, where x is the number of LSB bits replaced in the edge area and y is the number of bits replaced in the non-edge area; since x and y never reach the 3 MSB bits, the edge maps of the cover image and the stego image remain identical. Based on the tests carried out, the proposed steganography technique succeeds in improving imperceptibility quality, with the PSNR value increasing by about 1 to 2 dB compared to the previous method. Similarly, the message embedding capacity is increased thanks to a wider edge area: on the same image dataset, an average increase of 4,350 to 12,364 edge-area pixels is obtained while the imperceptibility quality of the stego image is maintained.
51,249
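A minimal sketch of variable-depth LSB embedding in the spirit described above: x bits per pixel in edge regions and y bits elsewhere, leaving the 3 MSBs untouched so an edge map recomputed from them stays identical. The edge mask below is a placeholder for the dilated hybrid detector, and x, y are illustrative.

```python
# Variable-depth LSB embedding sketch (edge pixels carry more bits).
import numpy as np

def embed(cover, bits, edge_mask, x=2, y=1):
    stego, i = cover.copy(), 0
    flat, flat_mask = stego.ravel(), edge_mask.ravel()
    for p in range(flat.size):
        k = x if flat_mask[p] else y                 # embedding depth per pixel
        if i + k > bits.size:
            break
        chunk = 0
        for b in bits[i:i + k]:                      # pack k message bits
            chunk = (chunk << 1) | int(b)
        flat[p] = (int(flat[p]) >> k << k) | chunk   # replace only the k LSBs
        i += k
    return stego

cover = np.random.default_rng(0).integers(0, 256, (8, 8), dtype=np.uint8)
edge_mask = np.zeros((8, 8), dtype=bool)
edge_mask[2:6, 2:6] = True                           # placeholder edge region
message = np.random.default_rng(1).integers(0, 2, 100)
stego = embed(cover, message, edge_mask)
print(int(np.abs(stego.astype(int) - cover.astype(int)).max()))  # <= 3 for x=2
```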
Title: Topic sentiment analysis using words embeddings dependency in edge social system Abstract: Topic sentiment analysis is fundamental for detecting potential cyber threats and cyber attacks in edge social systems: we can detect them by identifying the sentiment orientation and topics of public opinion information in the edge social system. The topic sentiment joint model is an extended model that aims to detect sentiments and topics simultaneously from online comments. Most existing topic sentiment joint models ignore the dependency among words, so they lose rich semantic information and the resulting distributions may be unsatisfactory. In this paper, we propose a novel topic sentiment joint model with word embedding dependency based on a recurrent neural network. The model introduces the dependency among word embeddings and delivers the topic and sentiment information of words through a recurrent neural network. It fully extends the semantic information and redefines the topic-sentiment-word distribution, and we thereby obtain more accurate topic detection and sentiment analysis. Experimental results on an online review data set show that the proposed model significantly improves sentiment classification accuracy and achieves better topic detection compared with previous methods.
51,259
Title: Hybrid Artificial Bee Colony and Monarchy Butterfly Optimization Algorithm (HABC-MBOA)-based cluster head selection for WSNs Abstract: Energy efficiency is considered the most pressing issue in Wireless Sensor Networks (WSNs), since they incorporate limited-size batteries that cannot be recharged or replaced. The energy possessed by the sensor nodes needs to be optimally utilized in order to extend the lifetime expectancy with guaranteed QoS in the network. In this paper, a Hybrid Artificial Bee Colony and Monarchy Butterfly Optimization Algorithm (HABC-MBOA)-based cluster head selection scheme is proposed for the predominant selection of cluster heads in the clustering process. The proposed HABC-MBOA replaces the employee bee phase of ABC with the mutated butterfly adjusting operator of MBOA to prevent solutions from being trapped early in a local optimum and to avoid delayed convergence by maintaining the tradeoff between exploitation and exploration. HABC-MBOA plays an anchor role in overcoming the ABC algorithm's inadequate global search potential. It also eliminates the possibility of cluster heads being overloaded with the maximum number of sensor nodes, which would otherwise result in the rapid death of sensor nodes under an impotent cluster-head selection process. The simulation results confirm that the number of alive nodes in the network is 18.92% superior to the benchmarked cluster head selection approaches.
51,349
Title: Search Under Accumulated Pressure. Abstract: The paper “Search Under Accumulated Pressure” explores how time pressure in the form of task accumulation might affect the information-gathering process of a decision maker. Decision makers often n...
51,391
Title: Railway timetabling with integrated passenger distribution Abstract: •We propose a novel timetabling approach with integrated passenger distribution model.•Two MILPs: one integrates a linear distribution, the other simulates the distribution.•We compare our models in experiments to state-of-the-art timetabling methods.•We show the effect of multiple/single route and integrated/predetermined routing.•Integrating a passenger distribution model can help to find better timetables.
51,430
Title: SingleCross-clustering: an algorithm for finding elongated clusters with automatic estimation of outliers and number of clusters Abstract: Many clustering methods perform well with spherical clusters but poorly with elongated clusters. The Single-linkage method is suitable for finding such long clusters, but it can be sensitive to outliers and noise in the data, causing the so-called chain effect. This work proposes a modification of the Cross-Clustering algorithm, SingleCross-Clustering (SCC), a partial clustering algorithm that estimates the number of clusters, recognizes outliers, and is useful for the identification of elongated clusters. SCC has been validated by comparing it with a number of existing clustering methods, showing on both simulated and real datasets that SCC is a reliable solution for identifying the correct number of clusters and the cluster memberships. The algorithm has been implemented in the R package CrossClustering, which can be downloaded for free from the CRAN contributed package repository.
51,462
Title: Using elastic net restricted kernel canonical correlation analysis for cross-language information retrieval Abstract: Kernel methods, a non-linear variant of linear methods, are used to increase flexibility and allow linear methods to examine non-linear relationships. The conventional solution of the restricted kernel canonical correlation analysis problem has a major drawback: it solves the problem in a reasonable time frame only for problems with few variables. We successfully overcame this limitation by implementing the method with the alternating least-squares algorithm. This allowed us to apply the method to a cross-language information retrieval problem on a big dataset. We compared the results to other established methods, and they were encouraging.
51,479
Title: An anticrime information support system design: Application of K-means-VMD-BiGRU in the city of Chicago Abstract: The sharp rise in urban crime rates is becoming one of the most important issues of public security, affecting many aspects of social sustainability, such as employment, livelihood, health care, and education. Therefore, it is critical to develop a predictive model capable of identifying areas with high crime intensity and detecting trends of crime occurrence in such areas for the allocation of scarce resources and investment in crime prevention and reduction strategies. This study develops a predictive model based on K-means clustering, a signal decomposition technique, and neural networks to identify the crime distribution in urban areas and accurately forecast the variation tendency of the number of crimes in each area. We find that the time series of the number of crimes in different areas are correlated in the long term, but this long-term effect is not reflected over short periods. Therefore, we argue that short-term joint law enforcement has no theoretical basis, because the data show that spatial heterogeneity and time lag cannot be reflected in a timely manner in short-term prediction. By combining the temporal and spatial effects, a high-precision anticrime information support system is designed, which can help the police implement more targeted crime prevention strategies at the micro level.
51,550
Title: Robust and lossless data privacy preservation: optimal key based data sanitization Abstract: Privacy-preserving data mining (PPDM) is the most significant approach to data security, on which much research is in progress. This paper proposes a new PPDM model that includes two phases: data sanitization and restoration. Initially, association rules are extracted before proceeding to the two phases. The first and foremost tactic of the proposed privacy preservation model is the generation of an optimal key that is used to produce the sanitized data from the original data; the same key is used by the receiver for the data restoration process. As key extraction plays a major role, this paper proposes a new hybrid algorithm, Trial-based Update on Whale and Particle swarm Algorithm (TU-WPA), for selecting the optimal key. The proposed method is a combination of particle swarm optimization and the whale optimization algorithm. More importantly, research issues such as the hiding failure rate, information preservation rate, false rule generation and degree of modification are minimized through the proposed sanitization and restoration processes. Finally, the performance of the proposed TU-WPA model is verified against other conventional models.
51,577
Title: Architecture and optimization of data mining modeling for visualization of knowledge extraction: Patient safety care Abstract: Visualization of the knowledge extraction process is at the front line of revealing the detailed process and data structure, and it is an advanced technique for the presentation of data modeling. However, healthcare mechanisms are challenging and dynamic processes, making it difficult to gain clear insight into or understanding of patient care. In this paper, we propose a new architecture and optimization of data mining modeling for the visualization of knowledge extraction, analyzing clinical data sets to define the determinant attributes through modeling techniques. An architecture for visualizing the knowledge extraction process is thus a systematic approach that supports users, to the best of their knowledge of the issues, over the challenges of visualization techniques. The proposed approach is capable and dynamic enough to handle and analyze large-scale data in its dimension and context. Variables are characterized using various techniques to detect the determinant variables and their influential circumstances. We focus on modeling-based visualization in terms of model representation, factor interaction, and integration. The detection process was experimented with under different approaches and justifications, as discussed in Section 5. The findings show a deep understanding of advanced and dynamic data mining modeling techniques that integrate applications with domain contexts for an optimal and understandable decision process. The strength of this approach is its depth of visualization of the knowledge extraction process and its understandability for users according to their background and circumstances. It is also essential for inference in architecture-based modeling and visualization of large-scale data. Researchers, physicians, experts, and other users can refer to these novel ideas and findings.
51,585
Title: Outcomes-based appropriation of context-aware ubiquitous technology across educational levels Abstract: This review study investigates the appropriation of sensing technology in context-aware ubiquitous learning (CAUL) in the fields of sciences, engineering, and humanities. Forty empirical studies with concrete learning outcomes across mandatory and higher education have been systematically reviewed and thematically analyzed with an outcomes-based teaching and learning approach. Four derived themes describe the design and implementation of CAUL: learner-centeredness, technological facilitation, learning ecology, and research evaluation. The learning processes enabled by context-aware sensing technology have been explicated, revealing specific ways to apply new technologies in formal and informal environments. The analysis based on intended learning outcomes suggests that more efforts should be directed to fostering competence in analyzing and creating in mandatory education, and to creating in tertiary settings. Finally, the unequal distribution of CAUL implementation across world regions calls for more technological appropriation in Southeast Asia and Africa. Specific suggestions on how to improve CAUL are also provided to better prepare learners for the twenty-first century.
51,605
Title: Connected civic gaming: rethinking the role of video games in civic education Abstract: Rising political polarization and the spread of disinformation have highlighted the need to re-assess and broaden existing approaches to civic education. Though video games have been presented as tools that could capitalize on youth's interest-driven engagement with technology to support situated modes of civic learning, their actual contributions remain contested. This conceptual paper sets out to update and expand existing approaches to the civic role of video games by offering a "connected civic gaming" framework. Connected civic gaming brings together two approaches: first, "connected civics" highlights the potential of digital technologies to create consequential connections between youth's interest driven cultural participation and civic modes of action. Second, "connected gaming" stresses the importance of positioning youth not only as game-players but also as makers of video games and active participants in the emerging communities that surround them. Accordingly, we offer a classification of the diverse civic contributions of game-playing and making, calling for two main shifts in research on civic video games: (1) overcoming the depiction of games as standalone interventions, and integrating them in broader educational efforts; and (2) a stronger emphasis on offering youth decision making power in- and about the games they play.
51,661
Title: Gompertz-Lindley distribution and associated inference Abstract: In this paper, we present a Gompertz-Lindley distribution obtained by mixing the frailty parameter of the Gompertz distribution with a Lindley distribution. The resulting model is more flexible, having increasing, decreasing and upside-down bathtub-shaped failure rate functions. The parameters are estimated by the maximum likelihood method, and their performance is examined by extensive simulation studies. Two practical examples are provided to illustrate the applicability of the model.
51,668
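For readers who want the mixture made explicit, here is the standard frailty calculation (our notation, not necessarily the paper's): with Gompertz baseline cumulative hazard H(t) and frailty Z following a Lindley(theta) distribution, the marginal survival function is the Laplace transform of Z evaluated at H(t).

```latex
% Lindley(\theta) frailty density: f(z) = \frac{\theta^2}{\theta+1}(1+z)e^{-\theta z}.
% Its Laplace transform evaluated at the Gompertz cumulative hazard H(t)
% gives the marginal survival function of the mixture:
S(t) = E_Z\left[e^{-Z H(t)}\right]
     = \frac{\theta^2\left(\theta + 1 + H(t)\right)}
            {(\theta+1)\left(\theta + H(t)\right)^2},
\qquad H(t) = \frac{a}{b}\left(e^{bt}-1\right),\quad a, b, \theta > 0.
```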
Title: Ratio detections for change point in heavy tailed observations Abstract: This article modifies the ratio test so that we can effectively detect a change which happens in the latter half of the observations. Moreover, we propose a new ratio test that self-normalizes the numerator and the denominator separately. We establish the asymptotic properties under the null and alternative hypotheses. We also propose a new estimator to locate an early or late change. We check the theoretical validity of the block bootstrap approximation for the modified tests and give the weak convergence of the proposed estimator. Simulations demonstrate the validity of the block bootstrap procedures and the advantage of the proposed estimator. An application to real data illustrates the practicability of the proposed test and estimators.
51,755