-
Tolerance Limits on Order Statistics in Future Samples Coming from the Two-Parameter Exponential Distribution
Nicholas A. Nechval, Konstantin N. Nechval
Issue: Volume 5, Issue 2-1, March 2016
Pages: 1-6
Received: 9 September 2015
Accepted: 10 September 2015
Published: 30 November 2015
Abstract: This paper presents an innovative approach to constructing lower and upper tolerance limits on order statistics in future samples. Attention is restricted to invariant families of distributions under parametric uncertainty. The approach used here emphasizes pivotal quantities relevant for obtaining tolerance factors and is applicable whenever the statistical problem is invariant under a group of transformations that acts transitively on the parameter space. It does not require the construction of any tables and is applicable whether the past data are complete or Type II censored. The proposed approach requires a quantile of the F distribution and is conceptually simple and easy to use. For illustration, the two-parameter exponential distribution is considered. A practical example is given.
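The paper's closed-form tolerance factor (via an F-distribution quantile) is not reproduced here. As a hedged illustration of the pivotal-quantity idea, the sketch below computes a lower (p, γ) tolerance limit of the form X(1) + k·S for a complete sample from the two-parameter exponential, finding the factor k by Monte Carlo from the classical pivotals 2n(X(1) − μ)/θ ~ χ²(2) and 2S/θ ~ χ²(2n − 2); the function names and the simulation route are this sketch's own assumptions, not the authors' method.

```python
import numpy as np

def lower_tolerance_limit_exp2(x, p=0.9, gamma=0.95, n_mc=200_000, seed=1):
    """Lower (p, gamma) tolerance limit X_(1) + k*S for the two-parameter
    exponential, with the factor k found by Monte Carlo on the pivotals
    2n(X_(1)-mu)/theta ~ chi2(2) and 2S/theta ~ chi2(2n-2)."""
    x = np.sort(np.asarray(x, dtype=float))
    n = x.size
    x1 = x[0]                  # smallest order statistic
    s = np.sum(x - x1)         # sufficient statistic for the scale
    rng = np.random.default_rng(seed)
    v = rng.chisquare(2, n_mc)            # pivotal for the threshold
    w = rng.chisquare(2 * (n - 1), n_mc)  # pivotal for the scale
    c = -np.log(p)  # content >= p  <=>  (L - mu)/theta <= -ln p
    # L = X_(1) + k*S covers content p with confidence gamma when k is
    # the (1 - gamma)-quantile of K = (c - V/(2n)) / (W/2).
    k = np.quantile((c - v / (2 * n)) / (w / 2), 1 - gamma)
    return x1 + k * s

# Example: simulated failure times with mu = 1, theta = 2
rng = np.random.default_rng(0)
data = 1.0 + rng.exponential(2.0, size=20)
print(lower_tolerance_limit_exp2(data, p=0.9, gamma=0.95))
```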
-
Efficient Approach to Pattern Recognition Based on Minimization of Misclassification Probability
Nicholas A. Nechval, Konstantin N. Nechval
Issue: Volume 5, Issue 2-1, March 2016
Pages: 7-11
Received: 9 September 2015
Accepted: 10 September 2015
Published: 30 November 2015
Abstract: In this paper, an efficient approach to pattern recognition (classification) is suggested. It is based on minimization of the misclassification probability and uses a transition from the high-dimensional problem (dimension p ≥ 2) to a one-dimensional problem (dimension p = 1), both in the two-class case and in the case of several classes, separating the classes as much as possible. The probability of misclassification, known as the error rate, is also used to judge the ability of various pattern recognition (classification) procedures to predict group membership. The approach does not require the arbitrary selection of priors, as in the Bayesian classifier, and represents a novel pattern recognition (classification) procedure that accommodates cases for which Fisher’s classification rule is not adequate (i.e., the class distributions are not multivariate normal, their covariance matrices differ, or strong nonlinearities are present). Moreover, it allows one to classify a set of multivariate observations all of which belong to the same unknown class. For cases that are adequate for Fisher’s classification rule, the proposed approach gives results similar to those of Fisher’s rule. For illustration, practical examples are given.
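The authors' minimum-misclassification procedure is not reproduced here. The sketch below is a simplified stand-in for the general idea the abstract describes: map p-dimensional observations to one dimension (Fisher's direction is used purely as a convenient projection) and then choose the cutoff that minimizes the empirical misclassification rate directly, without normality or equal-covariance assumptions; equal priors are assumed, and all function names are hypothetical.

```python
import numpy as np

def fisher_direction(x0, x1):
    """Direction w = S_pooled^{-1} (m1 - m0) used to map p-dimensional
    observations to one dimension."""
    m0, m1 = x0.mean(axis=0), x1.mean(axis=0)
    s_pooled = (np.cov(x0.T) * (len(x0) - 1)
                + np.cov(x1.T) * (len(x1) - 1)) / (len(x0) + len(x1) - 2)
    return np.linalg.solve(s_pooled, m1 - m0)

def min_error_cutoff(z0, z1):
    """Cutoff on the 1-D projections minimizing the empirical
    misclassification rate (equal priors, no normality assumption)."""
    cand = np.sort(np.concatenate([z0, z1]))
    errors = [(np.mean(z0 >= c) + np.mean(z1 < c)) / 2 for c in cand]
    return cand[int(np.argmin(errors))]

rng = np.random.default_rng(0)
x0 = rng.multivariate_normal([0, 0], [[1, 0.3], [0.3, 2]], 200)    # class 0
x1 = rng.multivariate_normal([2, 1], [[2, -0.4], [-0.4, 1]], 200)  # class 1, unequal covariance
w = fisher_direction(x0, x1)
c = min_error_cutoff(x0 @ w, x1 @ w)
print("classify new x as class 1 if x @ w >= c; cutoff:", c)
```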
-
A New Approach to Dose Estimation in Drug Development Based on Maximization of Likelihood of Grouped Data
Nicholas A. Nechval, Gundars Berzins, Vadims Danovics
Issue: Volume 5, Issue 2-1, March 2016
Pages: 12-20
Received: 2 November 2015
Accepted: 2 November 2015
Published: 30 November 2015
Abstract: Identifying the ‘right’ dose is one of the most critical and difficult steps in the clinical development process of any medicinal drug. Its importance cannot be overstated: selecting too high a dose can result in unacceptable toxicity and associated safety problems, while choosing too low a dose reduces the chance of showing sufficient efficacy in confirmatory trials, and thus the chance of approval for the drug. The optimal dose is the dose that gives the desired effect with minimum side effects. A dose is, of course, ‘optimal’ only for a given subject, and not necessarily for any other. In view of this, the objective of a dose-finding trial is not to determine a single fixed dose for use in the early phases of clinical trials or in medical practice, but to determine an interval of doses within which, with a stated degree of confidence, the defined acceptable therapeutic response will lie above, and the frequency of adverse reactions below, certain acceptable predetermined levels. If the subject samples used in the dose-finding studies adequately represent the subject population for which the drug is intended, the interval of doses so defined can be applied to the subject population as a whole. In this paper, we propose a technique based on maximization of the likelihood function for estimating the maximum tolerated dose (MTD) and the minimum effective dose (MED) on the basis of l samples of subjects, grouped in the simplest way. The necessary and sufficient conditions for the existence and uniqueness of the maximum likelihood estimates are derived. The proposed approach to dose estimation in drug development is simple and suitable for medical practice. Numerical examples are given.
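The paper's grouped-data likelihood and its exact MTD/MED definitions are not reproduced here. As a hedged illustration, the sketch below fits a standard logistic dose-response model to hypothetical grouped data (dose level, group size, responders) by maximizing the binomial likelihood and then inverts the fitted curve at a target probability; in practice the MED would be read off an efficacy curve and the MTD off a separately fitted toxicity curve.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import expit

# Hypothetical grouped data: dose level, subjects per group, responders
dose = np.array([1.0, 2.0, 4.0, 8.0, 16.0])
n    = np.array([20, 20, 20, 20, 20])
y    = np.array([1, 3, 8, 15, 19])

def neg_log_lik(theta):
    """Binomial negative log-likelihood of a logistic dose-response
    p(d) = expit(a + b*log(d)) for the grouped data above."""
    a, b = theta
    p = expit(a + b * np.log(dose))
    eps = 1e-12  # guard against log(0)
    return -np.sum(y * np.log(p + eps) + (n - y) * np.log(1 - p + eps))

fit = minimize(neg_log_lik, x0=np.array([0.0, 1.0]), method="Nelder-Mead")
a, b = fit.x

def dose_for_prob(q):
    """Dose at which the fitted response probability equals q."""
    return np.exp((np.log(q / (1 - q)) - a) / b)

print("dose with fitted response 0.2 (MED-type limit):", dose_for_prob(0.2))
print("dose with fitted response 0.8 (MTD-type limit):", dose_for_prob(0.8))
```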
-
Optimization of Statistical Decisions for Age Replacement Problems via a New Pivotal Quantity Averaging Approach
Nicholas A. Nechval, Gundars Berzins, Vadims Danovics
Issue: Volume 5, Issue 2-1, March 2016
Pages: 21-28
Received: 7 January 2016
Accepted: 8 January 2016
Published: 27 January 2016
Abstract: Age replacement strategies, where a unit is replaced upon failure or on reaching a predetermined age, whichever occurs first, provide simple and intuitively attractive replacement guidelines for technical units. Within the theory of stochastic processes, the optimal preventive replacement age, in the sense of leading to minimal expected costs per unit of time when the strategy is used for a sequence of similar units over a long period of time, is derived by application of the renewal reward theorem. The mathematical solution to the problem of finding the optimal replacement age is well known for the case when the parameter values of the underlying lifetime distributions are known with certainty. In actual practice, this is simply not the case. When these models are applied to real-world problems, the parameters are estimated and then treated as if they were the true values. The risk associated with using estimates rather than the true parameters is called estimation risk and is often ignored. When data are limited and (or) unreliable, estimation risk may be significant, and failure to incorporate it into the model design may lead to serious errors. Its explicit consideration is important, since decision rules that are optimal in the absence of uncertainty need not be even approximately optimal in the presence of such uncertainty. In the present paper, for efficient optimization of statistical decisions under parametric uncertainty, the pivotal quantity averaging (PQA) approach is suggested. This approach is a new, simple and computationally attractive statistical technique based on the constructive use of the invariance principle in mathematical statistics. It allows one to carry out the transition from the original problem to an equivalent transformed problem (in terms of pivotal quantities and ancillary factors) via invariant embedding of a sample statistic in the original problem. The statistical optimization of the equivalent transformed problem is then carried out via the ancillary factors. Unlike the Bayesian approach, the proposed approach is independent of the choice of priors. It allows one to eliminate the unknown parameters from the problem and to find better decision rules, with smaller risk than the well-known decision rules. Numerical examples illustrate the proposed approach.
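The PQA treatment of parameter uncertainty is the paper's contribution and is not reproduced here. The sketch below only sets up the known-parameter baseline the abstract starts from: minimizing the renewal-reward cost rate of an age replacement policy under a Weibull lifetime, with hypothetical parameter values and costs.

```python
import numpy as np
from scipy.integrate import quad
from scipy.optimize import minimize_scalar

# Known-parameter baseline: Weibull lifetime, shape beta > 1, scale eta
beta, eta = 2.5, 1000.0   # hypothetical values
c_p, c_f = 1.0, 10.0      # preventive vs. failure replacement cost

def survival(t):
    return np.exp(-(t / eta) ** beta)

def cost_rate(T):
    """Expected cost per unit time of replacing at age T or at failure,
    whichever comes first (renewal reward theorem)."""
    F = 1.0 - survival(T)                       # prob. of failure before T
    expected_cycle_length, _ = quad(survival, 0.0, T)
    return (c_f * F + c_p * (1.0 - F)) / expected_cycle_length

res = minimize_scalar(cost_rate, bounds=(1.0, 5000.0), method="bounded")
print("optimal preventive replacement age:", res.x)
print("minimal expected cost rate:", res.fun)
```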
-
Innovative Planning of In-Service Inspections of Fatigued Structures Under Parametric Uncertainty of Lifetime Models
Nicholas A. Nechval, Vadims Danovics, Natalija Ribakova
Issue: Volume 5, Issue 2-1, March 2016
Pages: 29-39
Received: 17 January 2016
Accepted: 19 January 2016
Published: 4 February 2016
Abstract: The main aim of this paper is to present more accurate stochastic fatigue models for solving fatigue reliability problems that are attractively simple and easy to apply in practice in situations where it is difficult to quantify the costs associated with inspections and undetected cracks. From an engineering standpoint, the fatigue life of a structure consists of two periods: (i) the crack initiation period, which starts with the first load cycle and ends when a technically detectable crack is present, and (ii) the crack propagation period, which starts with a technically detectable crack and ends when the remaining cross section can no longer withstand the applied loads and fails statically. Periodic inspections of fatigued structures, which are common practice in order to maintain their reliability above a desired minimum level, are based on the conditional reliability of the fatigued structure. During the crack initiation period, when the parameters of the underlying lifetime distributions are not assumed to be known, the pivotal quantity averaging (PQA) approach is offered for effective in-service inspection planning (with decreasing intervals as an alternative to the constant intervals often used in practice for operational convenience). During the crack propagation period (the damage tolerance situation), an approach to in-service inspection planning (with decreasing intervals between sequential inspections), based on an innovative crack growth equation, is proposed in order to construct a more accurate reliability-based inspection strategy. Numerical examples illustrate the suggested approaches.
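The PQA-based planning under parametric uncertainty is not reproduced here. The sketch below illustrates, for a Weibull lifetime with assumed known parameters, the conditional-reliability scheduling rule the abstract alludes to: each next inspection time keeps the conditional reliability since the previous inspection at a fixed level r, which produces decreasing inspection intervals whenever the hazard rate is increasing (shape β > 1).

```python
import numpy as np

def inspection_times(beta, eta, r, t_max):
    """Inspection schedule for a Weibull(beta, eta) lifetime such that the
    conditional reliability between consecutive inspections,
    R(t_{j+1}) / R(t_j) = exp(-((t_{j+1}/eta)**beta - (t_j/eta)**beta)),
    stays at the level r.  For beta > 1 the intervals decrease."""
    times, t = [], 0.0
    while True:
        t = eta * ((t / eta) ** beta - np.log(r)) ** (1.0 / beta)
        if t > t_max:
            return np.array(times)
        times.append(t)

ts = inspection_times(beta=3.0, eta=10_000.0, r=0.95, t_max=15_000.0)
print("inspection times:", np.round(ts))
print("intervals:", np.round(np.diff(ts)))  # strictly decreasing
```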
-
A Novel Approach to Finding Sampling Distributions for Truncated Laws via the Unbiasedness Equivalence Principle
Nicholas A. Nechval, Sergey Prisyazhnyuk, Vladimir F. Strelchonok
Issue: Volume 5, Issue 2-1, March 2016
Pages: 40-48
Received: 26 January 2016
Accepted: 28 January 2016
Published: 23 February 2016
Abstract: Truncated distributions arise naturally in many practical situations. In this paper, the problem of finding sampling distributions for truncated laws is considered. This problem concerns a very important area of information processing in industrial engineering. It remains perhaps the most difficult and important of the problems of mathematical statistics, requiring considerable effort and great skill to investigate. In a given problem, most would prefer to find a sampling distribution for a truncated law by the simplest method available. For many situations encountered in textbooks and in the literature, the approach discussed here is simple and straightforward. It is based on the unbiasedness equivalence principle (UEP), a new idea that often provides a neat method for finding sampling distributions for truncated laws. It avoids explicit integration over the sample space and the attendant Jacobian, but at the expense of verifying completeness of the recognized family of densities. Fortunately, general results on completeness obviate the need for this verification in many problems involving exponential families. The proposed approach allows one to obtain results for truncated laws via the results obtained for non-truncated laws, and it is much simpler than the known approaches. In many situations it yields the results for truncated laws with known truncation points and allows one to estimate system reliability in a simple way. The approach can also be used to find the sampling distribution for a truncated law when some or all of its truncation parameters are left unspecified. Illustrative examples are given.
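The unbiasedness equivalence principle itself is not reproduced here. The sketch below checks numerically the simplest instance of what the abstract claims, namely that results for truncated laws follow from those for non-truncated laws: for an exponential law left-truncated at a known point t0, the shifted sufficient statistic Σ(Xi − t0) has the same Gamma sampling distribution as ΣXi under the non-truncated law.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
theta, t0, n, reps = 2.0, 5.0, 10, 100_000

# Left-truncated exponential at known t0: X = t0 + Exp(theta)
samples = t0 + rng.exponential(theta, size=(reps, n))
stat = (samples - t0).sum(axis=1)  # shifted sufficient statistic

# Sampling distribution from the non-truncated law: a sum of n Exp(theta)
# variables is Gamma(n, theta); the truncated case inherits it after the shift.
ks = stats.kstest(stat, stats.gamma(a=n, scale=theta).cdf)
print("KS statistic:", ks.statistic, "p-value:", ks.pvalue)
```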
-
Efficient Predictive Inferences for Future Outcomes Under Parametric Uncertainty of Underlying Models
Nicholas A. Nechval, Natalija Ribakova, Gundars Berzins
Issue: Volume 5, Issue 2-1, March 2016
Pages: 49-55
Received: 31 January 2016
Accepted: 2 February 2016
Published: 23 February 2016
Abstract: Predictive inferences (predictive distributions, prediction and tolerance limits) for future outcomes on the basis of past and present knowledge represent a fundamental problem of statistics, arising in many contexts and producing varied solutions. This paper considers new-sample prediction based on a previous sample (i.e., when only the data observed in a previous sample are available for predicting the future outcomes in a new sample), within-sample prediction based on the early data from a current experiment (i.e., when only the early data from that sample are available for predicting the future outcomes in the same sample), and new-within-sample prediction (i.e., when both the early data from the new sample and the data from a previous sample are available for predicting the future outcomes in the new sample). It is assumed that only the functional form of the underlying distributions is specified, while some or all of their parameters are unspecified. In such cases ancillary statistics and pivotal quantities, whose distributions do not depend on the unknown parameters, are used. In order to construct predictive inferences for future outcomes, the invariant embedding technique, an exact pivotal-based method, is proposed. In particular, this technique can be used for the optimization of inventory management problems. A practical example is given.
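The invariant embedding technique is more general than what follows and is not reproduced here. The sketch shows the kind of exact pivotal-based prediction the abstract refers to, for the simplest case of new-sample prediction under a one-parameter exponential law: an upper γ prediction limit for one future observation, driven by the pivotal Y/(T/n) ~ F(2, 2n), with a Monte Carlo coverage check.

```python
import numpy as np
from scipy.stats import f

def upper_prediction_limit(x, gamma=0.95):
    """Exact upper gamma prediction limit for one future observation from
    the same exponential law, via the pivotal Y / (T/n) ~ F(2, 2n)."""
    x = np.asarray(x, dtype=float)
    n, T = x.size, x.sum()
    return (T / n) * f.ppf(gamma, 2, 2 * n)

# Monte Carlo coverage check
rng = np.random.default_rng(0)
theta, n, reps = 3.0, 15, 50_000
hits = 0
for _ in range(reps):
    past = rng.exponential(theta, n)
    future = rng.exponential(theta)
    hits += future <= upper_prediction_limit(past, 0.95)
print("empirical coverage:", hits / reps)  # should be close to 0.95
```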