We solve the problem of the displacement of Mercury's perihelion in Einstein's theory of gravitation, taking into account the mechanism of formation of matter from dark energy. It is shown that the observed value of the precession imposes a restriction on the equation of state of dark energy in the case of static fields.
The paper discusses the modeling and prediction of the Earth's climate using the AIDOS-X artificial intelligence system. We have developed a number of semantic information models demonstrating elements of similarity between the motion of the lunar orbit and the displacement of the instantaneous pole of the Earth. It was found that the movement of the Earth's poles leads to variations in the magnetic field and to seismic events, as well as to disruptions of the global circulation of the atmosphere and ocean, and in particular to the emergence of episodes such as El Niño and La Niña. Using semantic information models, we studied some equatorial regions of the Pacific Ocean, as well as spatial patterns of the temperate latitudes, and revealed their relative importance for the prediction of global climatic disturbances in the tropical and temperate latitudes. The causes of El Niño Modoki and their relationship with the movement of elements of the lunar orbit in long-term cycles are established. Earlier, we had forecast the occurrence of an El Niño episode in 2015. Based on the analysis of semantic models, we concluded that the expected El Niño would be of the classical type. Using the prediction block of AIDOS-X, we calculated a monthly evolution scenario of global climate anomalies. In this paper, we analyze the actual realization of the El Niño forecast from its publication in January 2015 to June 2015. It is shown that the predicted scenario of climatic anomalies was actually realized. Calculations of future climate scenarios with the recognition module of the AIDOS-X system indicate that further anomalous excesses of the surface ocean water temperature in the Niño 1,2 and Niño 3,4 regions in 2015 may be comparable with similar anomalies in the catastrophic El Niño of 1997-1998.
We have identified the basic sources of uncertainty in various industrial and economic situations. We have also considered the role and the tasks of forecasting in the management of industrial companies, particularly in the rocket and space industry. We have discussed the methods of organizational and economic forecasting (statistical, expert, and combined, including foresight) and have given some suggestions for improving the forecasting and planning mechanisms for practical use when creating space systems.
The statistics of objects of non-numerical nature (statistics of non-numerical objects, non-numerical data statistics, non-numeric statistics) is the area of mathematical statistics devoted to methods of analyzing non-numeric data. The basis for applying the results of mathematical statistics is probabilistic-statistical models of real phenomena and processes, the most important (and often the only ones) of which are models for obtaining data. The simplest example of a model for obtaining data is the model of the sample as a set of independent identically distributed random variables. In this article we consider the basic probabilistic models for obtaining non-numeric data, namely the models of dichotomous data, results of paired comparisons, binary relations, ranks, and objects of a general nature. We discuss various versions of these probabilistic models and their practical use. For example, the basic probabilistic model of dichotomous data is the Bernoulli vector (Lucian), i.e., a finite sequence of independent Bernoulli trials for which the probabilities of success may differ.

The mathematical tools for solving various statistical problems associated with Bernoulli vectors are useful for the analysis of random tolerances; random sets with independent elements; the processing of results of independent pairwise comparisons; statistical methods for analyzing the accuracy and stability of technological processes; the analysis and synthesis of statistical quality control plans (for dichotomous characteristics); the processing of marketing and sociological questionnaires (with closed "yes"/"no" questions); the processing of socio-psychological and medical data, in particular responses to psychological tests such as MMPI (used, among other things, in problems of human resource management); and the analysis of topographic maps (used for the analysis and prediction of areas affected by technological disasters, the spread of corrosion, the propagation of environmentally harmful pollutants, and various diseases, including myocardial infarction); as well as in other situations.
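The Bernoulli-vector model described above can be sketched numerically. The following is a minimal illustration, with made-up probabilities and sample size: we simulate many independent realizations of a Bernoulli vector whose components have different success probabilities and estimate each probability by its relative frequency.

```python
import numpy as np

# A Bernoulli vector (Lucian): a finite sequence of independent Bernoulli
# trials whose success probabilities may differ from component to component.
# The probabilities and the sample size below are illustrative choices.

rng = np.random.default_rng(1)

p_true = np.array([0.1, 0.5, 0.9])   # per-component success probabilities
n = 10_000                           # number of independent realisations

# Each row is one realisation of the Bernoulli vector
X = (rng.random((n, p_true.size)) < p_true).astype(int)

# Componentwise relative frequencies estimate the success probabilities
p_hat = X.mean(axis=0)
print(np.round(p_hat, 2))
```

With independent components, each frequency estimate converges to its own probability, which is the basic fact underlying the statistical procedures for Lucians listed in the abstract.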
In practice, we often encounter the problem of determining the state of a system based on the results of various measurements. Measurements are usually accompanied by random errors; therefore, we should speak not of determining the system state but of estimating it through stochastic processing of the measurement results. The monograph by E. A. Semenchina and M. Z. Laipanova [1] investigated one-step filtering of the measurement errors of the demand vector in the Leontiev balance model, as well as multi-step optimal filtering of those measurement errors. In this article, we state and investigate the inverse problem for the optimal one-step and multi-step filtering of the measurement errors of the demand vector. To solve it, we propose a method of constrained optimization: for a given and known disturbance, to determine (estimate) the matrix elements for one-step filtering of the measurement errors; and, for multi-step filtering, for given variables and a known disturbance, to determine the elements of the matrix. The solution of the inverse problem is reduced to constrained optimization problems, which are easily solved using MS Excel. The results of this research, outlined in the article, are of considerable interest for applied studies. The article also formulates the inverse problem in a dynamic Leontiev model and proposes a method for its solution.
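The idea of recovering matrix elements of a Leontiev model from observed quantities can be sketched numerically. The following is a minimal illustration in Python (the abstract's computations use MS Excel), with a synthetic coefficient matrix and exact observations; it solves the noise-free special case by least squares rather than the article's constrained-optimization filtering procedure.

```python
import numpy as np

# Minimal sketch: recover the technical-coefficients matrix A of a static
# Leontiev model x = A x + d from several observed (output, demand) pairs.
# The matrix and the demands below are synthetic; the article's actual
# inverse problem is solved by constrained optimization.

rng = np.random.default_rng(0)

A_true = np.array([[0.2, 0.3],
                   [0.1, 0.4]])   # hypothetical coefficients, spectral radius < 1

# Generate observations: pick demands d, compute outputs x = (I - A)^{-1} d
n_obs = 20
D = rng.uniform(1.0, 5.0, size=(n_obs, 2))
X = D @ np.linalg.inv(np.eye(2) - A_true).T   # each row satisfies x = A x + d

# Inverse problem: since d = (I - A) x, each observation gives x - d = A x,
# a linear system in the entries of A; stacked row-wise: X A^T = X - D.
M, *_ = np.linalg.lstsq(X, X - D, rcond=None)
A_est = M.T

print(np.round(A_est, 3))
```

With exact (noise-free) data the least-squares solution recovers the coefficients; with noisy demand measurements, the same objective would be minimized subject to constraints, which is the setting the article reduces to constrained optimization.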
Some estimators of the probability density function in spaces of arbitrary nature are used for various tasks in the statistics of non-numerical data. A systematic exposition of the theory of such estimators was begun in our articles [3, 4]. This article is a direct continuation of those works [3, 4]. We regularly refer to the conditions and theorems of the articles [3, 4], in which several types of nonparametric estimators of the probability density were introduced; in particular, we have studied linear estimators there. In this article, we consider a particular case: kernel density estimators in discrete spaces. When estimating the density of a one-dimensional random variable, kernel estimators become the Parzen-Rosenblatt estimators. Under different sets of conditions, we prove the consistency and asymptotic normality of kernel density estimators. We introduce the concept of "preferred rate differences" and study kernel density estimators based on it. We also introduce and study natural affinity measures, which are used in the analysis of the asymptotic behavior of kernel density estimators. Kernel density estimators are considered for sequences of spaces with measures. We give conditions under which the difference between the probability densities and the mathematical expectations of their kernel estimates tends to 0 uniformly. The uniform convergence of the variances is also established. We find conditions on the kernel functions under which these theorems on uniform convergence hold. As examples, we consider the spaces of fuzzy subsets of finite sets and the spaces of all subsets of finite sets. We give conditions that support the use of kernel density estimators in finite spaces. We also discuss a counterexample, the space of rankings, in which the application of kernel density estimators cannot be correct.
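A kernel density estimator in a discrete space can be sketched concretely. The following is an illustrative construction, not the article's specific estimator: the space of all subsets of a finite set, encoded as binary vectors with the Hamming distance, with a Gaussian-shaped kernel and an arbitrarily chosen bandwidth.

```python
import numpy as np

# Hedged sketch of a kernel density estimate in a discrete space: the space
# of all subsets of a 3-element set, as binary vectors with Hamming distance.
# The estimate follows the general recipe f_n(x) ∝ (1/n) Σ_i K(d(x, x_i)/h),
# normalised so it sums to 1 over the (finite) space. The kernel, bandwidth,
# and sample are illustrative choices, not the article's.

def hamming(a, b):
    return np.sum(a != b)

def kde_discrete(points, sample, h=1.0):
    """Kernel estimate of a probability mass function on a finite space."""
    K = lambda u: np.exp(-u * u / 2.0)          # Gaussian-shaped kernel
    raw = np.array([sum(K(hamming(x, s) / h) for s in sample)
                    for x in points])
    return raw / raw.sum()                      # normalise over the space

# Space: all subsets of a 3-element set, as binary indicator vectors
space = [np.array([i, j, k]) for i in (0, 1) for j in (0, 1) for k in (0, 1)]

# A small sample concentrated near the subset (1, 1, 0)
sample = [np.array([1, 1, 0])] * 4 + [np.array([1, 0, 0])] * 2

pmf = kde_discrete(space, sample, h=0.8)
print(np.round(pmf, 3))   # mass peaks at the subset (1, 1, 0)
```

The estimate places the most mass on the sample's modal subset and spreads the rest by distance, which is the qualitative behavior the consistency theorems formalize; on spaces such as rankings, the abstract notes, this construction can fail to be correct.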
In the practical use of methods of applied statistics, we do not apply separate methods for describing data, estimation, and hypothesis testing; instead we must use whole deployed procedures, the so-called "statistical technologies". The concept of a "statistical technology" is similar to the concept of a "technological process" in the theory and practice of production organization. It is quite natural that some statistical technologies meet the needs of the researcher (the user of statistics) better than others; some are modern and others outdated; the properties of some have been studied while those of others have not. It is important to stress that the qualified and efficient use of statistical methods is not a single statistical hypothesis test or the estimation of the characteristics or parameters of a given distribution from a fixed family. Operations of this kind are only the individual building blocks that make up a statistical technology. The procedure of statistical data analysis is an information process, in other words, one or another information technology. Statistical information is subjected to a variety of operations (in series, in parallel, or in more complex schemes). In this article we discuss statistical technologies and the problem of "docking" the algorithms. We introduce the concept of "high statistical technologies" and prove the necessity of their development and application. As examples, we cite the research of the Institute of High Statistical Technologies and Econometrics of Bauman Moscow State Technical University. We also consider a number of educational problems in the domain of high statistical technologies.
The article presents a theoretical substantiation, methods of numerical calculation, and a software implementation for solving problems of statistics, in particular the study of statistical distributions, by methods of information theory. On the basis of empirical data, we have determined by calculation the number of observations used for the analysis of statistical distributions. The proposed method of calculating the amount of information is not based on assumptions about the independence of observations or the normality of the distribution, i.e., it is non-parametric and ensures the correct modeling of nonlinear systems; it also allows comparable processing of heterogeneous data of numeric and non-numeric nature that are measured in scales of different types and in different units. Thus, ASC-analysis and the "Eidos" system constitute a modern innovative (ready for implementation) technology for solving problems of statistics by methods of information theory. This article can be used as a description of laboratory work in the following disciplines: intelligent systems; knowledge engineering and intelligent systems; intelligent technologies and knowledge representation; knowledge representation in intelligent systems; foundations of intelligent systems; introduction to neuromathematics and neural network methods; fundamentals of artificial intelligence; intelligent technologies in science and education; knowledge management; and automated system-cognitive analysis and the "Eidos" intelligent system, which the author is currently developing, as well as in other disciplines associated with the transformation of data into information, its transformation into knowledge, and the application of this knowledge to solve problems of identification, forecasting, decision making, and research of the modeled subject area (which covers virtually all subjects in all fields of science).
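One standard non-parametric way to measure the amount of information that a categorical feature carries about a categorical outcome is the mutual information computed from a contingency table of observed frequencies. The sketch below is this generic textbook construction, shown only to make the notion of "amount of information" concrete; it is not the specific information measure implemented in ASC-analysis or the "Eidos" system.

```python
import numpy as np

# Mutual information (in bits) from a joint frequency table. This is a
# generic, distribution-free information measure; it makes no normality or
# independence assumptions about the underlying observations. The counts
# below are made up for illustration.

def mutual_information(table):
    """Mutual information of rows vs. columns of a joint frequency table."""
    p = table / table.sum()                 # joint probabilities
    px = p.sum(axis=1, keepdims=True)       # marginal over rows
    py = p.sum(axis=0, keepdims=True)       # marginal over columns
    mask = p > 0                            # skip empty cells (0 log 0 = 0)
    return float(np.sum(p[mask] * np.log2(p[mask] / (px @ py)[mask])))

# Example: feature categories (rows) vs. outcome categories (columns)
counts = np.array([[30, 10],
                   [10, 30]])
print(round(mutual_information(counts), 4))   # → 0.1887
```

Mutual information is zero exactly when the table factorizes into its marginals (no association) and grows with the strength of the dependence, which is why such measures are suitable for comparable processing of data measured in scales of different types.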
In the USSR, from 1975 until the collapse of the country, the Higher Attestation Commission (HAC) was subordinated not to the Ministry of Education and Science but directly to the Council of Ministers of the USSR. Since then, however, there has been a steady trend of gradual reduction of the status of the Commission. Today it is not merely included in the Ministry of Education: it is just one of the units of one of the Ministry's structures, Rosobrnadzor. The reduced status of the HAC inevitably leads to a decline in the status and in the adequacy of the scientific degrees it assigns, as well as of scientific ranks. This process of devaluation of the traditional academic degrees and titles assigned by the HAC reached the point where, a few years ago, the salary increments for them were abolished. Now, instead, every university and research institute has developed its own local, i.e., mutually non-comparable, scientometric methods for evaluating the results of scientific and teaching activities. Despite the diversity of these techniques, they all have one thing in common: the disproportionate role of the h-index. The value of the Hirsch index starts to play an important role in thesis defenses, in the consideration of competitive cases for positions, and in determining the monthly rewards for the results of scientific and teaching activities. By itself, this index is theoretically well founded. However, in connection with the practice of its application in our conditions, a kind of mania has arisen in the collective consciousness of the scientific community, which the authors call "Hirsch-mania". This mania is characterized by an unhealthy elevated interest in the value of the Hirsch index, as well as by incorrect manipulation of its value, i.e., inadequate artificial exaggeration of this value, and by a number of negative consequences of that interest. In this study we have attempted to construct a quantitative measure for assessing the extent of improper manipulation of the value of the Hirsch index, and we offer a science-based modification of the h-index that is insensitive (resistant) to such manipulation. The article presents a technique for all the numerical calculations, which is simple enough for any author to use.
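The classical h-index is easy to compute, and one simple robustified variant is to discard self-citations before counting. The sketch below shows both; the variant is only an illustration of the idea of a manipulation-resistant modification, not the specific modification proposed by the authors, and the citation counts are invented.

```python
# The classical Hirsch index: the largest h such that the author has
# h papers with at least h citations each.

def h_index(citations):
    """h-index of a list of per-paper citation counts."""
    cites = sorted(citations, reverse=True)
    h = 0
    for i, c in enumerate(cites, start=1):
        if c >= i:
            h = i
        else:
            break
    return h

# Invented example: total citations per paper, and the self-citations
# contained in each count (one crude channel for inflating the index).
total = [10, 8, 5, 4, 3, 1]
self_cites = [6, 5, 0, 1, 0, 0]

print(h_index(total))                                        # → 4
print(h_index([t - s for t, s in zip(total, self_cites)]))   # → 3
```

Comparing the two values gives a crude numeric signal of how much of the index rests on self-citation, which is the kind of quantitative assessment of manipulation the abstract describes in general terms.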
We have analyzed the current state of the main computer-statistical methods, identified achievements and existing problems, outlined the prospects for further development, and formulated the problems to be solved. We have also discussed Monte Carlo methods, pseudo-random numbers, simulation, bootstrap and resampling, and automated system-cognitive analysis. We have considered the applications of computer statistics in controlling, and the properties of statistical packages as tools for researchers.
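The bootstrap mentioned above can be illustrated in a few lines: resampling the data with replacement gives an empirical picture of a statistic's sampling variability without any distributional assumptions. The data, sample size, and number of replications below are illustrative choices.

```python
import numpy as np

# Hedged sketch of the bootstrap: estimate the standard error of the sample
# median by resampling the observed data with replacement. No assumptions
# about the underlying distribution are needed.

rng = np.random.default_rng(42)
data = rng.exponential(scale=2.0, size=100)   # synthetic "observed" sample

B = 2000   # number of bootstrap replications
medians = np.array([
    np.median(rng.choice(data, size=data.size, replace=True))
    for _ in range(B)
])

# Spread of the resampled medians estimates the median's standard error
se_boot = float(medians.std(ddof=1))
print(round(se_boot, 3))
```

The same resampling loop works for almost any statistic, which is why bootstrap and resampling figure prominently among the computer-statistical methods the abstract surveys.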