The article presents a mathematical method developed for modeling the process of droplet spreading on a horizontal surface. On the basis of an asymptotic method and the method of separation of variables, we obtain an approximate analytical solution of the problem
Is it possible to automate the study of the properties of numbers and their relationships so that the results of this study can be formulated as statements indicating the specific amount of information stored in them? To answer this question, we propose to apply to the properties of numbers in number theory the same method that has been widely tested and proven in studies of real objects and their relations in various fields, namely automated system-cognitive analysis (ASC-analysis), which is based on information theory
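ASC-analysis rests on measuring the amount of information that an attribute of an object carries about its membership in a class. As a minimal illustration of that information-theoretic idea applied to numbers (a sketch of the principle only, not the knowledge-base formulas of an actual ASC-analysis system), one can compute the pointwise amount of information log2(P(class | attribute) / P(class)):

```python
import math
from collections import Counter

def information_amounts(numbers, attribute, classifier):
    """For each observed (attribute value, class) pair, the amount of
    information (in bits) the attribute carries about the class:
    log2( P(class | attribute) / P(class) )."""
    numbers = list(numbers)
    n = len(numbers)
    attr_vals = [attribute(x) for x in numbers]
    classes = [classifier(x) for x in numbers]
    p_class = Counter(classes)          # class frequencies
    p_attr = Counter(attr_vals)         # attribute-value frequencies
    joint = Counter(zip(attr_vals, classes))
    return {
        (a, c): math.log2((cnt / p_attr[a]) / (p_class[c] / n))
        for (a, c), cnt in joint.items()
    }

# Illustrative example: how much does the last decimal digit of a number
# tell us about divisibility by 5?
info = information_amounts(
    range(1, 101),
    attribute=lambda x: x % 10,        # last decimal digit
    classifier=lambda x: x % 5 == 0,   # divisible by 5?
)
```

For numbers 1..100, a last digit of 0 determines divisibility by 5, so it carries log2(1/0.2) = log2(5), about 2.32 bits, about that class.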
The article investigates the first boundary value problem for second-order partial differential equations with a deviating argument. The method of separation of variables is used to prove the solvability of the boundary value problem in the desired class of functions
Applied statistics is the science of how to analyze statistical data. As an independent scientific and practical area it is developing very quickly and includes numerous widely and deeply developed scientific directions. Those who use applied statistics and other statistical methods are usually focused on specific areas of study, i.e., they are not specialists in applied statistics. It is therefore useful to give a critical analysis of the current state of applied statistics and to discuss trends in the development of statistical methods. The great practical importance of applied statistics justifies work on the development of its methodology, in which this field of scientific and applied activity is considered as a whole. We give some brief information about the history of applied statistics. Based on the scientometrics of applied statistics, we conclude that each expert possesses only a small part of the knowledge accumulated in this area. We discuss five topical areas in which modern applied statistics is developing, i.e., five "points of growth": nonparametric statistics, robustness, the bootstrap, statistics of interval data, and statistics of non-numerical data. We discuss in some detail the basic ideas of non-numerical statistics. Over the last 60-plus years in Russia, a huge gap has formed between official statistics and the scientific community of experts on statistical methods
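Of the five "points of growth" named above, the bootstrap is the easiest to illustrate in a few lines. A minimal percentile-bootstrap sketch for a confidence interval of the mean (the data and parameters here are made up for illustration):

```python
import random
import statistics

def bootstrap_ci(sample, stat=statistics.mean, n_boot=2000, alpha=0.05, seed=0):
    """Percentile bootstrap confidence interval: resample the data with
    replacement, recompute the statistic, and take empirical quantiles
    of the bootstrap replicates."""
    rng = random.Random(seed)
    n = len(sample)
    reps = sorted(
        stat([sample[rng.randrange(n)] for _ in range(n)])
        for _ in range(n_boot)
    )
    lo = reps[int((alpha / 2) * n_boot)]
    hi = reps[int((1 - alpha / 2) * n_boot) - 1]
    return lo, hi

data = [4.1, 5.3, 3.8, 6.0, 4.7, 5.5, 4.9, 5.1, 4.4, 5.8]
lo, hi = bootstrap_ci(data)
```

The attraction of the method is exactly what made it a "point of growth": it needs no distributional assumptions, only the ability to recompute the statistic on resampled data.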
Without science it would be impossible to form a full environmental consciousness. To increase the validity and weight of findings on the impact of the environment on quality of life, it is necessary to quantify the strength and direction of the influence of diverse environmental factors. However, this turns out to be quite problematic for a number of reasons. First, there is a lack, or inaccessibility, of the source data necessary for this type of research. The data that can be found cover only short observation periods (small longitudinal datasets), and supplementing them, including by performing experiments, is fundamentally impossible. As a result, one cannot require full data replication, which is a necessary condition for the correct application of factor analysis. Second, environmental factors are described by heterogeneous indices measured in different types of measurement scales (nominal, ordinal and numerical) and in different measurement units; mathematical methods for the comparable processing of such data, and the corresponding software tools, generally speaking, do not exist. Third, these are large-scale problems: we are talking not about 5, or at most 7, factors, as in classical factor analysis, but about hundreds and thousands. Fourth, the original data are noisy and require robust methods. Fifth, environmental factors are interrelated and require nonlinear nonparametric approaches. To solve these problems it is proposed to apply a new innovative intelligent technology: automated system-cognitive analysis and its software tool, the "Eidos" system. We also give a brief numerical example of assessing the impact of environmental factors on life expectancy and causes of death
Statistical control is sampling control based on probability theory and mathematical statistics. The article presents the development of the methods of statistical control in our country. It discusses the basics of the theory of statistical control: the plans of statistical control and their operating characteristics, the risks of the supplier and the consumer, the acceptance level of defectiveness and the rejection level of defectiveness. We obtain an asymptotic method for the synthesis of control plans based on the limit average output level of defectiveness. We also develop the asymptotic theory of single sampling plans and formulate some unsolved mathematical problems in the theory of statistical control
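The operating characteristic of a single sampling plan mentioned above can be computed directly. A minimal sketch under the standard binomial model; the plan parameters and defectiveness levels below are illustrative, not values from the article:

```python
from math import comb

def oc_curve(n, c, p):
    """Operating characteristic of the single sampling plan (n, c):
    the probability of accepting a lot whose fraction defective is p,
    assuming the number of defectives in the sample is Binomial(n, p)."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(c + 1))

n, c = 50, 2           # inspect 50 items, accept if at most 2 are defective
aql, rql = 0.01, 0.10  # acceptance / rejection levels of defectiveness

supplier_risk = 1 - oc_curve(n, c, aql)  # alpha: a good lot is rejected
consumer_risk = oc_curve(n, c, rql)      # beta: a bad lot is accepted
```

Plotting oc_curve against p gives the familiar monotonically decreasing OC curve; the supplier's and consumer's risks are simply its values read off at the acceptance and rejection levels of defectiveness.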
Nonparametric estimates of the probability distribution density in spaces of arbitrary nature are one of the main tools of non-numerical statistics. We consider their particular cases: kernel density estimates in spaces of arbitrary nature, histogram estimates, and Fix-Hodges-type estimates. The purpose of this article is to complete a series of papers devoted to the mathematical study of the asymptotic properties of various types of nonparametric estimates of the probability distribution density in spaces of a general nature. This provides a mathematical foundation for the application of such estimates in non-numerical statistics. We begin by considering the mean square error of the kernel density estimate and, in order to maximize its rate of decrease, the choice of the kernel function and of the sequence of smoothing parameters (bandwidths). The basic concepts are the circular distribution function and the circular density. The order of convergence in the general case is the same as in estimating the density of a numerical random variable, but the main conditions are imposed not on the density of the random variable itself but on the circular density. Next, we consider other types of nonparametric density estimates: histogram estimates and Fix-Hodges-type estimates. Then we study nonparametric regression estimates and their application to solving discriminant analysis problems in a space of general nature
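The defining feature of a kernel density estimate in a space of arbitrary nature is that it needs only a metric, not coordinates. A minimal sketch; the kernel, bandwidth and normalization below are illustrative simplifications, since in a general space the correct normalization depends on the circular distribution of distances:

```python
import math

def kernel_density(x, sample, metric, h):
    """Kernel density estimate in a space of arbitrary nature: the
    estimate at x is proportional to the average of K(d(x, x_i) / h)
    over the sample.  The exact normalizing constant is omitted; in a
    general space it is governed by the circular distribution function."""
    gauss = lambda t: math.exp(-t * t / 2)  # Gaussian kernel (illustrative)
    return sum(gauss(metric(x, xi) / h) for xi in sample) / (len(sample) * h)

# Example in a non-numerical space: binary strings with Hamming distance.
hamming = lambda a, b: sum(c1 != c2 for c1, c2 in zip(a, b))
sample = ["0000", "0001", "0011", "0010", "1111"]

dense = kernel_density("0000", sample, hamming, h=1.0)   # near the cluster
sparse = kernel_density("1100", sample, hamming, h=1.0)  # far from it
```

The estimate is larger near the cluster of similar strings than away from it, which is all a density estimate in a general space is asked to express.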
We consider an approach to the transition from a continuous to a discrete scale, defined by means of a quantization step (i.e., a grouping interval). The applied purpose is selecting the number of gradations in sociological questionnaires. In accordance with the methodology of the general theory of stability, we propose to choose the step so that the errors generated by quantization are of the same order as the errors inherent in the answers of respondents. For a finite length of the interval of variation of the measured value, this quantization step uniquely determines the number of gradations. It turns out that for many questions it is enough to offer 3-6 answer gradations (prompts). On the basis of a probabilistic model we prove three quantization theorems, which allowed us to develop recommendations on the choice of the number of gradations in sociological questionnaires. The idea of "quantization" has applications beyond sociology, and we note that it can be used for more than selecting the number of gradations. Thus, there are two very interesting applications of the idea of "quantization" in inventory management theory: in the two-level model and in the classical Wilson model taking deviations from it into account (showing that "quantization" can be used as a way to improve stability). For the two-level inventory management model we prove three theorems. We abandon the assumption of Poisson demand, which is rarely satisfied in practice, and give generally fairly simple formulas for finding the optimal values of the control parameters, simultaneously correcting the mistakes of predecessors. Once again we see the interpenetration of statistical methods that arose to analyze data from a variety of subject areas, in this case from sociology and logistics. This is further confirmation that statistical methods form a single scientific and practical area, which it is inappropriate to divide by areas of application
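The core prescription, pick the quantization step h so that the quantization error matches the respondents' own error and then divide the scale length by h, can be sketched in a few lines. The matching rule used here (h = 2 x respondent error, so the maximal quantization error h/2 equals the respondent error) is an illustrative assumption, not the content of the theorems proved in the article:

```python
import math

def number_of_gradations(scale_length, respondent_error):
    """Choose the quantization step h so that the maximal quantization
    error h/2 is of the same order as the error inherent in the
    respondents' answers; on a scale of finite length L this step
    determines the number of gradations as ceil(L / h)."""
    h = 2.0 * respondent_error          # illustrative matching rule
    return math.ceil(scale_length / h)

# A 10-point satisfaction scale where answers are reliable to about +/- 1.
n_grad = number_of_gradations(scale_length=10.0, respondent_error=1.0)
```

With these illustrative numbers the rule yields 5 gradations, inside the 3-6 range the article reports as sufficient for many questions.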
The article examines the application of system-cognitive analysis, its mathematical model (the system theory of information) and its software toolkit (the "Eidos" system) to the synthesis of generalized images of classes, their abstraction, the classification of generalized images (clusters and constructs), and the comparison of concrete images with generalized images (identification). We suggest a new approach to the digitization of images based on the use of a polar coordinate system, the center of gravity of the image, and its contour. Before digitization, the images can be transformed to standardize their position within the frame, their size and their rotation. Therefore, if this option is specified, the results of digitization and of the ASC-analysis of images are invariant (independent) with respect to their position, size and rotation. This means that in the model, on the basis of a number of specific examples, one generalized image of each class of images is created, independent of its specific implementations, i.e., the "Eidos" of these images (in the sense of Plato), a prototype or archetype (in the Jungian sense) of the images. The "Eidos" system provides not only the formation of prototype images, which quantitatively reflect the amount of information in the image elements of the prototype, but also the removal of everything irrelevant to identification (abstraction), the comparison of specific images with generalized ones (identification), and the comparison of generalized images with each other (classification). The article provides a detailed numerical example of ASC-analysis of images
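The digitization idea, centering on the center of gravity and reading the contour in polar coordinates, can be sketched as follows. This is an illustrative reconstruction of the principle, not the Eidos implementation: centering gives position invariance and dividing by the mean radius gives scale invariance, while full rotation invariance would require an additional alignment step.

```python
import math

def polar_signature(contour, n_angles=8):
    """Digitize a closed contour (a list of (x, y) points) into a polar
    signature: mean distances from the center of gravity, sampled by
    angular sector.  Centering removes dependence on position; dividing
    by the mean radius removes dependence on size."""
    cx = sum(p[0] for p in contour) / len(contour)   # center of gravity
    cy = sum(p[1] for p in contour) / len(contour)
    # Polar angle and radius of every contour point relative to the center.
    polar = [(math.atan2(y - cy, x - cx), math.hypot(x - cx, y - cy))
             for x, y in contour]
    # Average the radii of the points falling into each angular sector.
    sectors = [[] for _ in range(n_angles)]
    for ang, r in polar:
        idx = int((ang + math.pi) / (2 * math.pi) * n_angles) % n_angles
        sectors[idx].append(r)
    sig = [sum(s) / len(s) if s else 0.0 for s in sectors]
    mean_r = sum(sig) / len(sig) or 1.0
    return [r / mean_r for r in sig]                 # scale-normalized

square = [(1, 1), (1, -1), (-1, -1), (-1, 1)]
shifted_scaled = [(2 * x + 10, 2 * y + 5) for x, y in square]
```

Shifting and uniformly scaling the contour leaves the signature unchanged, which is exactly the invariance property claimed for the digitization step.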
The article examines the application of system-cognitive analysis, its mathematical model (the system theory of information) and its software toolkit (the "Eidos" system) to loading images from graphics files, the synthesis of generalized images of classes, their abstraction, the classification of generalized images (clusters and constructs), and the comparison of concrete images with generalized images (identification). We suggest using information theory to process the data, computing for every pixel the amount of information indicating that the image belongs to a certain class. A numerical example is given in which, on the basis of a number of specific examples of images belonging to different classes, generalized images of these classes are formed, independent of their specific implementations, i.e., the "Eidoses" of these images (in the sense of Plato), the prototypes or archetypes of the images (in the Jungian sense). The "Eidos" system provides not only the formation of prototype images, which quantitatively reflect the amount of information carried by the elements of specific images about their belonging to particular prototypes, but also the comparison of specific images with generalized ones (identification) and of generalized images with each other (classification)
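The central quantity here, the amount of information a pixel carries about membership in a class, can be illustrated with tiny binary images. This is a sketch of the information-theoretic idea only, not the actual knowledge-base formulas of the "Eidos" system; Laplace smoothing is added so that no probability is zero:

```python
import math

def pixel_information(images_by_class):
    """For each (class, pixel) pair, the amount of information (in bits)
    that the pixel being 'on' carries about the class:
    I = log2( P(on | class) / P(on) ), with Laplace smoothing."""
    all_imgs = [img for imgs in images_by_class.values() for img in imgs]
    n_pix = len(all_imgs[0])
    # Smoothed probability that each pixel is on, over all images.
    p_on = [(1 + sum(img[i] for img in all_imgs)) / (2 + len(all_imgs))
            for i in range(n_pix)]
    info = {}
    for cls, imgs in images_by_class.items():
        for i in range(n_pix):
            p_on_cls = (1 + sum(img[i] for img in imgs)) / (2 + len(imgs))
            info[(cls, i)] = math.log2(p_on_cls / p_on[i])
    return info

# Two classes of 4-pixel binary images: 'left' mostly lights pixels 0-1,
# 'right' mostly lights pixels 2-3.
info = pixel_information({
    "left":  [(1, 1, 0, 0), (1, 1, 0, 0), (1, 0, 0, 0)],
    "right": [(0, 0, 1, 1), (0, 0, 1, 1), (0, 0, 0, 1)],
})
```

Pixels characteristic of a class get positive information amounts and uncharacteristic ones get negative amounts; summing these over the "on" pixels of a new image gives a simple identification score for each class.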