A method for the calculation of the one-particle generalized coefficients of fractional parentage (generalized CFPs or GCFPs) for an arbitrary number of j-orbits with isospin and an arbitrary number of oscillator quanta is presented. The approach is based on a simple enumeration scheme for antisymmetric many-particle states, an efficient algorithm for the calculation of the CFPs for a single j-orbit with isospin, and a general procedure for the computation of the angular momentum (isospin) coupling coefficients describing the transformation between different momentum-coupling schemes. The method provides fast calculation of GCFPs for a given particle number and produces results with small numerical uncertainties. The introduced GCFPs make feasible the calculation of expectation values of one-particle nuclear shell-model operators within the isospin formalism.
This paper focuses on the Bayesian approach to multiextremal optimization problems, based on modelling the objective function by a Gaussian random field (GRF) and using Euclidean distance matrices with fractional degrees to represent GRF covariances. A recursive optimization algorithm is developed that maximizes the expected improvement of the objective function at each step, using the results of the optimization steps already performed. Conditional mean and conditional variance expressions, derived by modelling the GRF with covariances expressed by fractional Euclidean distance matrices, are used to calculate the expected improvement of the objective function. The efficiency of the developed algorithm was investigated by computer modelling, solving test problems and comparing the developed algorithm with known heuristic multiextremal optimization algorithms.
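The sketch below illustrates, under stated assumptions, how the conditional mean and variance of a GRF can be combined into an expected-improvement criterion. The covariance built from fractional powers of Euclidean distances (fractional Brownian field form with parameter H), the jitter term, and the minimization convention are illustrative choices and are not taken from the paper.

```python
import numpy as np
from scipy.stats import norm

def fbf_cov(X1, X2, H=0.5):
    """Covariance built from fractional powers of Euclidean distances
    (fractional Brownian field form); H is an assumed smoothness parameter."""
    n1 = np.linalg.norm(X1, axis=1)
    n2 = np.linalg.norm(X2, axis=1)
    d = np.linalg.norm(X1[:, None, :] - X2[None, :, :], axis=2)
    return 0.5 * (n1[:, None] ** (2 * H) + n2[None, :] ** (2 * H) - d ** (2 * H))

def expected_improvement(x, X, y, H=0.5, jitter=1e-10):
    """Conditional mean/variance of the GRF at x given observations (X, y),
    and the resulting expected improvement for minimization."""
    x = np.atleast_2d(x)
    K = fbf_cov(X, X, H) + jitter * np.eye(len(X))
    k = fbf_cov(X, x, H)
    mu = k.T @ np.linalg.solve(K, y)                                  # conditional mean
    var = fbf_cov(x, x, H).diagonal() - np.einsum('ij,ij->j', k, np.linalg.solve(K, k))
    sigma = np.sqrt(np.maximum(var, 1e-12))                           # conditional std. deviation
    z = (y.min() - mu) / sigma
    return (y.min() - mu) * norm.cdf(z) + sigma * norm.pdf(z)

# At each step the next evaluation point would be the maximizer of
# expected_improvement over the feasible region (e.g. over a candidate set).
```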
In this paper, we analyze the results of native Lithuanian speaker recognition and identification using a long short-term memory (LSTM) deep neural network. We examine recognition accuracy and identify potential further improvements. The dataset used for training and speaker recognition consists of over 370 unique speakers who provided voice utterances in the Lithuanian language. In this paper we present results derived from a part of this dataset.
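As a concrete illustration of the kind of network the abstract refers to, the sketch below defines a small LSTM classifier over fixed-length feature sequences. The use of MFCC features, the layer sizes, the sequence length, and the training call are assumptions for illustration only; the number of output classes follows the 370-speaker figure mentioned above.

```python
import tensorflow as tf

NUM_SPEAKERS = 370            # assumption based on the dataset size mentioned above
NUM_FRAMES, NUM_MFCC = 200, 13  # illustrative sequence length and feature dimension

model = tf.keras.Sequential([
    tf.keras.Input(shape=(NUM_FRAMES, NUM_MFCC)),
    tf.keras.layers.LSTM(128, return_sequences=True),
    tf.keras.layers.LSTM(64),
    tf.keras.layers.Dense(NUM_SPEAKERS, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(train_mfcc, train_speaker_ids, validation_split=0.1, epochs=20)
```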
Population initialization is one of the important tasks in evolutionary and genetic algorithms (GAs). It can considerably affect the speed of convergence and the quality of the obtained results. In this paper, some heuristic strategies (procedures) for constructing the initial populations in genetic algorithms are investigated. The purpose is to examine how different population initialization strategies (procedures) can influence the quality of the final solutions of GAs. Several simple procedures were algorithmically implemented and tested on one of the hard combinatorial optimization problems, the quadratic assignment problem (QAP). The results of the computational experiments demonstrate the usefulness of the proposed strategies. In addition, these strategies are of quite general character and may be easily transferred to other population-based metaheuristics (such as particle swarm or bee colony optimization methods).
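One way to see what such an initialization procedure might look like for the QAP is the sketch below: it seeds each population member with the best of several random permutations, biasing the starting population toward better objective values. This is only an illustrative heuristic, not one of the specific procedures studied in the paper.

```python
import random

def qap_cost(perm, flow, dist):
    """Objective of the quadratic assignment problem for a given permutation."""
    n = len(perm)
    return sum(flow[i][j] * dist[perm[i]][perm[j]] for i in range(n) for j in range(n))

def init_population(flow, dist, pop_size, candidates_per_member=5):
    """Illustrative initialization heuristic: for each population member,
    sample several random permutations and keep the cheapest one."""
    n = len(flow)
    population = []
    for _ in range(pop_size):
        best, best_cost = None, float("inf")
        for _ in range(candidates_per_member):
            perm = random.sample(range(n), n)
            cost = qap_cost(perm, flow, dist)
            if cost < best_cost:
                best, best_cost = perm, cost
        population.append(best)
    return population
```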
The emergence of the agent paradigm and its further applications has stimulated the appearance of new concepts and methodologies in computer science. Today, terms like multi-agent system, agent-oriented methodology, and agent-oriented programming (AOP) are widely used. The aim of this paper is to clarify the validity of the usage of the terms AOP and AOP language. This is disclosed in two phases of an analysis process. In the first phase, it is determined to which concepts terms like agent, programming, object-oriented analysis and design, object-oriented programming, and agent-oriented analysis and design correspond. In the second phase, several known agent system engineering methodologies are analysed in terms of the key concepts used, the final resulting artifacts, and their relationship with known programming paradigms and modern tools for agent system development. The research shows that in the final phase of agent system design and in the coding stage, the main artifact is an object, defined according to the rules of the object-oriented paradigm. Hence, we conclude that the computing society still does not have AOP owing to the lack of an AOP language. Thus, the term AOP is very often incorrectly assigned to agent system development frameworks that, in all cases, transform agents into objects.
In this work, a recursive algorithm for the estimation of hidden Markov model parameters is constructed. The hidden Markov models are modelled by a Gaussian distribution whose parameters are distributed according to the multivariate normal law with an unknown mean vector and covariance matrix. Estimates of the unknown parameters are obtained by the maximum likelihood method. The recursive algorithm is based on formulas derived by the maximum likelihood method and on the classical EM algorithm. Since the execution time of the recursive algorithm is proportional to the number of processed observations, it can be used for real-time estimation of the model parameters. The properties of the implemented recursive EM algorithm were investigated in a computer experiment by clustering data. It can also be applied to real-time data classification and recognition problems.
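The per-observation parameter update at the heart of such a recursive EM scheme can be sketched as follows; the stochastic-approximation form of the mean and covariance updates is an illustrative choice, and the state responsibility gamma is assumed to be supplied by the HMM forward pass rather than computed here.

```python
import numpy as np

class OnlineGaussianState:
    """Sketch of recursive (per-observation) updates of the mean vector and
    covariance matrix of one hidden state's Gaussian emission density."""
    def __init__(self, dim):
        self.weight = 1e-6             # accumulated responsibility (avoids division by zero)
        self.mean = np.zeros(dim)
        self.cov = np.eye(dim)

    def update(self, x, gamma):
        """x: new observation; gamma: posterior probability of this state,
        assumed to come from the HMM forward recursion."""
        self.weight += gamma
        eta = gamma / self.weight      # step size shrinks as evidence accumulates
        diff = x - self.mean
        self.mean += eta * diff
        self.cov += eta * (np.outer(diff, diff) - self.cov)
```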
This paper presents an algorithm for the calculation of proton and neutron distributions in atomic nucleus shells, which may be used for ab initio no-core nuclear shell model computations. The problem of enumeration of many-particle states is formulated on an energy basis instead of applying the traditional scheme for state classification. The algorithm provides calculations of proton and neutron occupation restrictions for nuclear shells for an arbitrary number of oscillator quanta. The reported results show that the presented algorithm significantly outperforms the traditional approach and may fit the needs of state-of-the-art no-core shell model calculations of atomic nuclei.
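For illustration only, the sketch below enumerates occupation numbers of harmonic-oscillator shells for one kind of nucleon under a limit on the total number of oscillator quanta; the degeneracy (N + 1)(N + 2) per shell is the standard oscillator value, and the code is not the enumeration scheme proposed in the paper.

```python
def shell_occupations(n_nucleons, max_quanta, n_shells):
    """List all ways to distribute identical nucleons over oscillator shells
    0..n_shells-1, where shell N holds at most (N + 1)(N + 2) nucleons of one
    kind and contributes N quanta per particle, with total quanta <= max_quanta."""
    results = []

    def recurse(shell, left, quanta, occ):
        if shell == n_shells:
            if left == 0:
                results.append(tuple(occ))
            return
        degeneracy = (shell + 1) * (shell + 2)
        for k in range(min(left, degeneracy) + 1):
            if quanta + k * shell > max_quanta:
                break
            recurse(shell + 1, left - k, quanta + k * shell, occ + [k])

    recurse(0, n_nucleons, 0, [])
    return results

# e.g. shell_occupations(8, 8, 4): configurations of 8 protons over 4 shells
# with at most 8 total oscillator quanta.
```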
The article discusses the possibilities of assessing the destruction risk of historic cemeteries in the Klaipeda region using the multiple-criteria analytic hierarchy process (AHP). The proposed original assessment methodology was developed by combining information from the scientific literature on the preservation of historical artefacts with data collected by scientists of the Institute of Baltic Region History and Archaeology during their field expeditions to the Evangelical Lutheran cemeteries of the Klaipeda region. The results show that the process of historical cemetery destruction risk assessment can be formalized and fully automated using AHP and modern software.
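As a reminder of the standard AHP computation such an assessment relies on, the sketch below derives priority weights from a pairwise comparison matrix via its principal eigenvector and checks Saaty's consistency ratio; the comparison matrix entries would come from expert judgements on the cemetery risk criteria and are not reproduced here.

```python
import numpy as np

def ahp_priorities(pairwise):
    """Priority weights (principal eigenvector) and consistency ratio for an
    AHP pairwise comparison matrix; standard AHP computation."""
    A = np.asarray(pairwise, dtype=float)
    n = A.shape[0]
    eigvals, eigvecs = np.linalg.eig(A)
    k = np.argmax(eigvals.real)
    w = np.abs(eigvecs[:, k].real)
    w /= w.sum()                                   # normalized priority vector
    # Saaty's random consistency indices for matrix sizes 1..10
    RI = [0.0, 0.0, 0.58, 0.90, 1.12, 1.24, 1.32, 1.41, 1.45, 1.49]
    CI = (eigvals[k].real - n) / (n - 1) if n > 2 else 0.0
    CR = CI / RI[n - 1] if n > 2 else 0.0          # acceptable if roughly below 0.1
    return w, CR
```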
This paper addresses the issue of finding the most efficient estimator of the normal population mean when the population coefficient of variation (C.V.) is rather large though unknown, using a small sample (sample size ≤ 30). The paper proposes an efficient iterative estimation algorithm that exploits the sample C.V. for efficient estimation of the normal mean. The MSEs of the estimators under this strategy have very intricate algebraic expressions depending on the unknown values of the population parameters, and hence are not amenable to an analytical study determining the extent of gain in their relative efficiencies with respect to the usual unbiased estimator $\bar{X}$ (the sample mean, 'UUE'). Nevertheless, we examine these relative efficiencies of our estimators with respect to the usual unbiased estimator by means of an illustrative empirical simulation study. MATLAB 7.7.0.471 (R2008b) is used for programming this illustrative simulated empirical numerical study.
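The exact iteration of the proposed algorithm is not reproduced here; the sketch below shows one plausible Searls-type scheme in the same spirit, in which the sample coefficient of variation is plugged into a shrinkage factor and the estimate is updated until convergence. The functional form of the update is an assumption.

```python
import numpy as np

def iterative_cv_mean(sample, max_iter=50, tol=1e-10):
    """Plausible Searls-type iterative mean estimator using the sample C.V.;
    the exact iteration used in the paper may differ."""
    x = np.asarray(sample, dtype=float)
    n = x.size
    xbar = x.mean()
    s = x.std(ddof=1)
    mu = xbar                               # start from the usual unbiased estimator
    for _ in range(max_iter):
        cv2 = (s / mu) ** 2                 # squared C.V. evaluated at the current estimate
        mu_new = xbar / (1.0 + cv2 / n)     # shrinkage factor aimed at reducing MSE
        if abs(mu_new - mu) < tol:
            break
        mu = mu_new
    return mu
```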
The paper presents results on a dimensionality reduction technique based on radial basis function (RBF) theory. The technique uses RBFs to map multidimensional data points into a low-dimensional space by interpolating the previously calculated positions of so-called control points. The paper analyses various ways of selecting the control points (the regularized orthogonal least squares method, random and stratified selections). The experiments have been carried out on 8 real and artificial data sets. Positions of the control points in the low-dimensional space are found by principal component analysis. Combinations of the RBF technique with random and stratified selections outperformed RBF with the regularized orthogonal least squares algorithm in terms of computation time on all data sets. We demonstrate that random and stratified selections of control points are efficient and acceptable in terms of the balance between projection error (stress) and time consumption.
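A minimal sketch of the random-selection variant of this idea is given below, under the assumptions of a Gaussian RBF, a small regularization term, and PCA positions computed only from the control points; the kernel width and control-point count are illustrative.

```python
import numpy as np

def rbf_project(X, n_control=50, eps=1.0, random_state=0):
    """Map X to 2-D: randomly chosen control points get PCA positions, and all
    points are then placed by Gaussian-RBF interpolation of those positions."""
    rng = np.random.default_rng(random_state)
    idx = rng.choice(len(X), size=n_control, replace=False)
    C = X[idx]
    # PCA positions of the control points in the 2-D space
    Cc = C - C.mean(axis=0)
    _, _, Vt = np.linalg.svd(Cc, full_matrices=False)
    Y_ctrl = Cc @ Vt[:2].T
    # Gaussian RBF interpolation coefficients (with a small regularization term)
    phi = lambda D: np.exp(-(eps * D) ** 2)
    D_cc = np.linalg.norm(C[:, None] - C[None, :], axis=2)
    W = np.linalg.solve(phi(D_cc) + 1e-8 * np.eye(n_control), Y_ctrl)
    # Interpolate low-dimensional positions for all data points
    D_xc = np.linalg.norm(X[:, None] - C[None, :], axis=2)
    return phi(D_xc) @ W
```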