In engineering design, it is important to guarantee that the values of certain quantities, such as stress level, noise level, and vibration level, stay below a certain threshold in all possible situations, i.e., for all possible combinations of the corresponding internal and external parameters. Usually, the number of possible combinations is so large that it is not possible to physically test the system for all these combinations. Instead, we form a computer model of the system and test this model. In this testing, we need to take into account that computer models are usually approximate. In this paper, we show that the existing techniques for taking model uncertainty into account overestimate the uncertainty of the results. We also show how we can get more accurate estimates.

Introduction

Bounds on Unwanted Processes: An Important Part of Engineering Specifications.

An engineering system is designed to perform certain tasks. In the process of performing these tasks, the system also generates some undesirable side effects: it can generate noise, vibration, heat, and stress.

We cannot completely eliminate these undesired effects, but specifications for an engineering system usually require that the size q of each of these effects does not exceed a certain predefined threshold (bound) t. It is therefore important to check that this specification is always satisfied, i.e., that $q \le t$ in all possible situations.

How Can We Check That Specifications are Satisfied for All Possible Situations: Simulations are Needed.

To fully describe each situation, we need to know which parameters $p_1, p_2, \ldots$ characterize this situation, and we need to know the exact values of all these parameters. These may be external parameters, such as the wind speed and the load for a bridge, or internal parameters, such as the exact value of the Young's modulus of a material used in the design.

In practice, we usually know only the main parameters $p_1, \ldots, p_n$. For each of these parameters, we know the interval of possible values $[\underline{p}_i, \overline{p}_i]$; see, e.g., [1–4]. For many parameters $p_i$, this interval is described by setting a nominal value $\tilde{p}_i$ and the bound $\Delta_i$ on possible deviations from this nominal value. In such a setting, the interval of possible values has the form
$[\tilde{p}_i - \Delta_i,\; \tilde{p}_i + \Delta_i].$  (1)
In other cases, the bounds $\underline{p}_i$ and $\overline{p}_i$ are given directly. However, we can always describe the resulting interval in the form (1) if we take the midpoint of this interval as $\tilde{p}_i$ and its half-width as $\Delta_i$:
$\tilde{p}_i = \dfrac{\underline{p}_i + \overline{p}_i}{2}, \qquad \Delta_i = \dfrac{\overline{p}_i - \underline{p}_i}{2}.$  (2)

Thus, without loss of generality, we can always assume that the set of possible values of each parameter $p_i$ is given by Eq. (1).

We would like to make sure that the quantity q satisfies the desired inequality $q \le t$ for all possible combinations of values $p_i \in [\underline{p}_i, \overline{p}_i]$. Usually, there are many such parameters, and thus, there are many possible combinations: even if we limit ourselves to extreme cases, when each parameter $p_i$ is equal to either $\underline{p}_i$ or $\overline{p}_i$, we still get $2^n$ possible combinations. It is therefore not feasible to physically check how the system behaves under all such combinations. Instead, we need to rely on computer simulations.

Formulation of the Problem.

There are known techniques for using computer simulation to check that the system satisfies the given specifications for all possible combinations of these parameters; see, e.g., [5] and references therein. These techniques, however, have been originally designed for the case in which we have an exact model of the system.

In principle, we can also use these techniques in more realistic situations, when the corresponding model is only approximate. However, as we show in this paper, the use of these techniques leads to overestimation of the corresponding uncertainty. We also show that a proper modification of these techniques leads to a drastic decrease of this overestimation and thus to more accurate estimations.

How to Check Specifications When We Have an Exact Model of a System: Reminder

Case of an Exact Model: Description.

To run the corresponding computer simulations, we need to have a computer model that, given the values of the parameters $p_1, \ldots, p_n$, estimates the corresponding value of the quantity q. Let us first consider situations in which this computer model is exact, i.e., in which this model enables us to compute the exact value of q:
$q = q(p_1, \ldots, p_n).$  (3)

In Most Engineering Situations, Deviations From Nominal Values are Small.

Usually, possible deviations $\Delta p_i \stackrel{\text{def}}{=} p_i - \tilde{p}_i$ from the nominal values are reasonably small; see, e.g., [6]. In this paper, we will restrict ourselves to such situations.

In such situations, we can plug the values $p_i = \tilde{p}_i + \Delta p_i$ into Eq. (3), expand the resulting expression into a Taylor series in terms of the small values $\Delta p_i$, and ignore terms that are quadratic (or of higher order) in $\Delta p_i$:
$q = \tilde{q} + \sum_{i=1}^{n} c_i \cdot \Delta p_i,$  (4)
where we denote
$\tilde{q} \stackrel{\text{def}}{=} q(\tilde{p}_1, \ldots, \tilde{p}_n) \quad \text{and} \quad c_i \stackrel{\text{def}}{=} \dfrac{\partial q}{\partial p_i}(\tilde{p}_1, \ldots, \tilde{p}_n).$  (5)

How to Use the Linearized Model to Check That Specifications are Satisfied: Analysis of the Problem.

To make sure that we always have $q \le t$, we need to guarantee that the largest possible value $\overline{q}$ of the function q does not exceed t.

How can we compute this upper bound $\overline{q}$? The maximum of the sum (4) is attained when each of the n terms $c_i \cdot \Delta p_i$ attains its largest possible value. Each of these terms is a linear function of $\Delta p_i \in [-\Delta_i, \Delta_i]$, and its largest possible value is $|c_i| \cdot \Delta_i$, attained at $\Delta p_i = \Delta_i \cdot \mathrm{sign}(c_i)$. So, the desired maximum has the form (see, e.g., [6,7])
$\overline{q} = \tilde{q} + \sum_{i=1}^{n} |c_i| \cdot \Delta_i.$  (6)

How to Estimate the Derivatives $c_i$

In many practical cases, we have an explicit equation or, more generally, a known program for computing $q(p_1, \ldots, p_n)$. In this case, by explicitly differentiating the corresponding expression, or by applying an automatic differentiation algorithm to the corresponding program, we can get equations for computing the derivatives $c_i$.

In many real-life situations, however, our computations use proprietary software, for which the corresponding program is not available to us. In such situations, we cannot use automatic differentiation tools, and we can only use the results of applying the algorithm q to different tuples.

In such situations, we need to use numerical differentiation to estimate the values $c_i$ of the derivatives. In the linear approximation,
$q(\tilde{p}_1, \ldots, \tilde{p}_{i-1}, \tilde{p}_i + h_i, \tilde{p}_{i+1}, \ldots, \tilde{p}_n) = \tilde{q} + c_i \cdot h_i,$  (7)
so
$c_i = \dfrac{q(\tilde{p}_1, \ldots, \tilde{p}_{i-1}, \tilde{p}_i + h_i, \tilde{p}_{i+1}, \ldots, \tilde{p}_n) - \tilde{q}}{h_i}.$  (8)
In particular, substituting this expression for $c_i$, with $h_i = \Delta_i$, into Eq. (6), we get
$\overline{q} = \tilde{q} + \sum_{i=1}^{n} |q_i - \tilde{q}|,$  (9)
where we denoted
$q_i \stackrel{\text{def}}{=} q(\tilde{p}_1, \ldots, \tilde{p}_{i-1}, \tilde{p}_i + \Delta_i, \tilde{p}_{i+1}, \ldots, \tilde{p}_n).$  (10)

Thus, we arrive at the following technique (see, e.g., [7]).

How to Use the Linearized Model to Check That Specifications are Satisfied: Resulting Technique.

We know:

  • An algorithm $q(p_1, \ldots, p_n)$ that, given the values of the parameters $p_1, \ldots, p_n$, computes the value of the quantity q.

  • A threshold t that needs to be satisfied.

  • For each parameter $p_i$, we know its nominal value $\tilde{p}_i$ and the largest possible deviation $\Delta_i$ from this nominal value.

Based on this information, we need to check whether $q(p_1, \ldots, p_n) \le t$ for all possible combinations of values $p_i$ from the corresponding intervals $[\tilde{p}_i - \Delta_i, \tilde{p}_i + \Delta_i]$.

We can perform this checking as follows:

  1. First, we apply the algorithm q to compute the value $\tilde{q} = q(\tilde{p}_1, \ldots, \tilde{p}_n)$;

  2. Then, for each i from 1 to n, we apply the algorithm q to compute the value $q_i = q(\tilde{p}_1, \ldots, \tilde{p}_{i-1}, \tilde{p}_i + \Delta_i, \tilde{p}_{i+1}, \ldots, \tilde{p}_n)$;

  3. After that, we compute $\overline{q} = \tilde{q} + \sum_{i=1}^{n} |q_i - \tilde{q}|$; and

  4. Finally, we check whether $\overline{q} \le t$.

If $\overline{q} \le t$, this means that the desired specifications are always satisfied. If $\overline{q} > t$, this means that for some combinations of possible values $p_i$, the specifications are not satisfied.
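To illustrate these four steps, here is a minimal Python sketch; it is our own illustration (the function name check_spec_linearized and the toy quantity q are hypothetical), not code from the original text:

```python
import numpy as np

def check_spec_linearized(q, p_nom, delta, t):
    """Check q(p) <= t over the box [p_nom - delta, p_nom + delta]
    using the linearized bound (9): q_bar = q~ + sum_i |q_i - q~|.

    q      : callable taking a 1-D array of parameters and returning a float
    p_nom  : array of nominal values p~_i
    delta  : array of largest deviations Delta_i
    t      : threshold
    """
    p_nom = np.asarray(p_nom, dtype=float)
    delta = np.asarray(delta, dtype=float)

    q_tilde = q(p_nom)                      # step 1: value at the nominal tuple
    q_bar = q_tilde
    for i in range(len(p_nom)):
        p = p_nom.copy()
        p[i] += delta[i]                    # step 2: perturb the i-th parameter by Delta_i
        q_bar += abs(q(p) - q_tilde)        # step 3: accumulate |q_i - q~|
    return q_bar, (q_bar <= t)              # step 4: compare with the threshold

# toy usage: q(p) = 2 p_1 - 3 p_2 + p_3, nominal values (1, 1, 1), deviations 0.1
q_bar, ok = check_spec_linearized(lambda p: 2*p[0] - 3*p[1] + p[2],
                                  [1.0, 1.0, 1.0], [0.1, 0.1, 0.1], t=1.0)
print(q_bar, ok)   # approximately 0.6, True (for this linear q, 0.6 is also the exact maximum)
```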

Possibility of a Further Speedup.

Equation (9) requires n+1 calls to the program that computes q for given values of parameters. In many practical situations, the program q takes a reasonably long time to compute, and the number of parameters is large. In such situations, the corresponding computations require a very long time.

A possibility to speed up the corresponding computations comes from the properties of the Cauchy distribution, i.e., a distribution with a probability density function
$\rho_\Delta(x) = \dfrac{\Delta}{\pi \cdot (x^2 + \Delta^2)}.$  (11)

The possibility to use Cauchy distributions comes from the fact that they have the following property: if $\eta_i$ are independent variables that are Cauchy distributed with parameters $\Delta_i$, then for each tuple of real numbers $c_1, \ldots, c_n$, the linear combination $\sum_{i=1}^{n} c_i \cdot \eta_i$ is also Cauchy distributed, with parameter $\Delta = \sum_{i=1}^{n} |c_i| \cdot \Delta_i$.

Thus, we can find Δ as follows [8]:

  1. First, for $k = 1, \ldots, N$, we simulate random variables $\eta_i^{(k)}$ which are Cauchy distributed with parameters $\Delta_i$.

  2. For each k, we then estimate $\Delta q^{(k)} = \sum_{i=1}^{n} c_i \cdot \eta_i^{(k)}$ as $\Delta q^{(k)} = q^{(k)} - \tilde{q}$, where
     $q^{(k)} = q(\tilde{p}_1 + \eta_1^{(k)}, \ldots, \tilde{p}_n + \eta_n^{(k)}).$  (12)

  3. Based on the sample of N values $\Delta q^{(1)}, \ldots, \Delta q^{(N)}$ which are Cauchy distributed with parameter $\Delta$, we estimate this parameter; e.g., we can use the maximum-likelihood method, according to which the desired value $\Delta$ can be found from the equation
     $\sum_{k=1}^{N} \dfrac{1}{1 + \left(\Delta q^{(k)} / \Delta\right)^2} = \dfrac{N}{2},$
     which can be easily solved by bisection if we start with the interval $[\underline{\Delta}, \overline{\Delta}]$ in which $\underline{\Delta} = 0$ and $\overline{\Delta} = \max_k |\Delta q^{(k)}|$;

  4. Finally, we follow Eq. (6) and compute $\overline{q} = \tilde{q} + \Delta$ (see, e.g., [8] for technical details).

In this Monte-Carlo-type technique, we need N + 1 calls to the program that computes q. The accuracy of the resulting estimate depends only on the sample size N and not on the number of inputs n. Thus, for a fixed desired accuracy, when n is sufficiently large, this method requires much fewer calls to q and is, thus, much faster. For example, if we want to estimate $\Delta$ with relative accuracy 20%, then we need N = 100 simulations, so for $n \ge 200$, this method is much faster than a straightforward application of Eq. (9).
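The following is a hedged Python sketch of this Cauchy-deviate procedure, assuming a generic black-box function q; the helper name cauchy_bound is ours, and the maximum-likelihood equation above is solved with scipy's brentq root finder on the interval described in step 3:

```python
import numpy as np
from scipy.optimize import brentq

def cauchy_bound(q, p_nom, delta, N=100, rng=None):
    """Estimate Delta = sum_i |c_i| * Delta_i by the Cauchy-deviate method
    and return the bound q_bar = q~ + Delta (sketch under stated assumptions)."""
    rng = np.random.default_rng(rng)
    p_nom = np.asarray(p_nom, float)
    delta = np.asarray(delta, float)

    q_tilde = q(p_nom)                                   # nominal value q~
    dq = np.empty(N)
    for k in range(N):
        eta = delta * rng.standard_cauchy(len(p_nom))    # Cauchy deviates with parameters Delta_i
        dq[k] = q(p_nom + eta) - q_tilde                 # Delta q^(k)

    # maximum-likelihood equation: sum_k 1/(1 + (dq_k/Delta)^2) = N/2,
    # solved on (0, max_k |dq_k|], where the left-hand side increases from 0 to >= N/2
    f = lambda D: np.sum(1.0 / (1.0 + (dq / D) ** 2)) - N / 2.0
    Delta = brentq(f, 1e-12 * np.max(np.abs(dq)), np.max(np.abs(dq)))
    return q_tilde + Delta                               # q_bar = q~ + Delta
```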

For Many Practical Problems, We Can Achieve an Even Faster Speedup.

In both methods described in this section, we apply the original algorithm $q(p_1, \ldots, p_n)$ several times: first, to the tuple of nominal values $(\tilde{p}_1, \ldots, \tilde{p}_n)$ and then to several other tuples $(\tilde{p}_1 + \eta_1, \ldots, \tilde{p}_n + \eta_n)$. For example, in the linearized method (9), we apply the algorithm q to the tuples $(\tilde{p}_1, \ldots, \tilde{p}_{i-1}, \tilde{p}_i + \Delta_i, \tilde{p}_{i+1}, \ldots, \tilde{p}_n)$ corresponding to $i = 1, \ldots, n$.

In many practical cases, once we have computed the value $\tilde{q} = q(\tilde{p}_1, \ldots, \tilde{p}_n)$, we can then compute the values
$q(\tilde{p}_1 + \eta_1, \ldots, \tilde{p}_n + \eta_n)$  (13)
faster than by directly applying the algorithm q to the corresponding tuple. This happens, for example, if the algorithm for computing $q(p_1, \ldots, p_n)$ solves a system of nonlinear equations $F_j(q_1, \ldots, q_k, p_1, \ldots, p_n) = 0$, $1 \le j \le k$, and then returns $q = q_1$.
In this case, once we know the values $\tilde{q}_j$ for which $F_j(\tilde{q}_1, \ldots, \tilde{q}_k, \tilde{p}_1, \ldots, \tilde{p}_n) = 0$, we can find the values $q_j = \tilde{q}_j + \Delta q_j$ for which
$F_j(\tilde{q}_1 + \Delta q_1, \ldots, \tilde{q}_k + \Delta q_k, \tilde{p}_1 + \eta_1, \ldots, \tilde{p}_n + \eta_n) = 0$  (14)
by linearizing this system into an easy-to-solve system of linear equations
$\sum_{j'=1}^{k} a_{jj'} \cdot \Delta q_{j'} + \sum_{i=1}^{n} b_{ji} \cdot \eta_i = 0,$  (15)
where $a_{jj'} \stackrel{\text{def}}{=} \partial F_j / \partial q_{j'}$ and $b_{ji} \stackrel{\text{def}}{=} \partial F_j / \partial p_i$.
A similar simplification is possible when the value q corresponding to given values $p_1, \ldots, p_n$ comes from solving a system of nonlinear differential equations
$\dfrac{dx_j}{dt} = f_j(x_1, \ldots, x_m, p_1, \ldots, p_n).$  (16)
In this case, once we find the solution $\tilde{x}_j(t)$ to the system of differential equations
$\dfrac{d\tilde{x}_j}{dt} = f_j(\tilde{x}_1, \ldots, \tilde{x}_m, \tilde{p}_1, \ldots, \tilde{p}_n)$  (17)
corresponding to the nominal values, we do not need to explicitly solve the modified system of differential equations
$\dfrac{dx_j}{dt} = f_j(x_1, \ldots, x_m, \tilde{p}_1 + \eta_1, \ldots, \tilde{p}_n + \eta_n)$  (18)
to find the corresponding solution. Instead, we can take into account that the differences $\eta_i$ are small; thus, the resulting differences $\Delta x_j(t) \stackrel{\text{def}}{=} x_j(t) - \tilde{x}_j(t)$ are also small. So, we can linearize the resulting differential equations
$\dfrac{d\,\Delta x_j}{dt} = f_j(\tilde{x}_1 + \Delta x_1, \ldots, \tilde{x}_m + \Delta x_m, \tilde{p}_1 + \eta_1, \ldots, \tilde{p}_n + \eta_n) - f_j(\tilde{x}_1, \ldots, \tilde{x}_m, \tilde{p}_1, \ldots, \tilde{p}_n)$  (19)
into easier-to-solve linear equations
$\dfrac{d\,\Delta x_j}{dt} = \sum_{j'=1}^{m} a_{jj'} \cdot \Delta x_{j'} + \sum_{i=1}^{n} b_{ji} \cdot \eta_i,$  (20)
where $a_{jj'} \stackrel{\text{def}}{=} \partial f_j / \partial x_{j'}$ and $b_{ji} \stackrel{\text{def}}{=} \partial f_j / \partial p_i$.

This idea—known as local sensitivity analysis—is successfully used in many practical applications; see, e.g., [9,10].
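As a rough illustration of this speedup, the following Python sketch uses a toy two-equation system of our own (not a system from the original text): the nonlinear system is solved once at the nominal values, and perturbed solutions are then obtained from the linearized system (15) with numerically estimated Jacobians:

```python
import numpy as np
from scipy.optimize import fsolve

# Illustrative system (hypothetical): F1 = q1^2 + q2 - p1 = 0, F2 = q1 + q2^2 - p2 = 0,
# and the quantity of interest is q = q1.
def F(qv, pv):
    q1, q2 = qv
    p1, p2 = pv
    return np.array([q1**2 + q2 - p1, q1 + q2**2 - p2])

def jacobians(qv, pv, h=1e-6):
    """Numerical Jacobians a_{jj'} = dF_j/dq_{j'} and b_{ji} = dF_j/dp_i."""
    F0 = F(qv, pv)
    A = np.column_stack([(F(qv + h*e, pv) - F0) / h for e in np.eye(len(qv))])
    B = np.column_stack([(F(qv, pv + h*e) - F0) / h for e in np.eye(len(pv))])
    return A, B

p_nom = np.array([2.0, 2.0])
q_nom = fsolve(lambda qv: F(qv, p_nom), x0=np.array([1.0, 1.0]))  # expensive nonlinear solve, done once

eta = np.array([0.01, -0.02])              # small perturbation of the parameters
A, B = jacobians(q_nom, p_nom)
dq = np.linalg.solve(A, -B @ eta)          # linearized system: A*dq + B*eta = 0
q_perturbed = q_nom + dq                   # cheap linear correction instead of a new nonlinear solve
```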

Comment. As we have mentioned earlier, in this paper, we consider only situations in which the deviations $\Delta p_i$ from the nominal values are small. In some practical situations, some of these deviations are not small. In such situations, we can no longer use linearization, and we need to use global optimization and global sensitivity techniques; see, e.g., [9,10].

What If We Take Into Account Model Inaccuracy

Models are Rarely Exact.

Engineering systems are usually complex. As a result, it is rarely possible to find explicit expressions for q as a function of the parameters $p_1, \ldots, p_n$. Usually, we have to rely on approximate computations. For example, if q is obtained by solving a system of partial differential equations, we use, e.g., the finite element method to find an approximate solution and thus an approximate value of the quantity q.

How Model Inaccuracy is Usually Described.

In most practical situations, at best, we know the upper bound $\epsilon$ on the inaccuracy of the computational model. In such cases, for each tuple of parameters $p_1, \ldots, p_n$, once we apply the computational model and get the value $Q(p_1, \ldots, p_n)$, the actual (unknown) value $q(p_1, \ldots, p_n)$ of the quantity q satisfies the inequality
$|Q(p_1, \ldots, p_n) - q(p_1, \ldots, p_n)| \le \epsilon.$  (21)

How This Model Inaccuracy Affects the Above Checking Algorithms: Analysis of the Problem.

Let us start with Eq. (9). This equation assumes that we know the exact values of $\tilde{q} = q(\tilde{p}_1, \ldots, \tilde{p}_n)$ and $q_i$ (as defined by Eq. (10)). Instead, we know the values
$\tilde{Q} \stackrel{\text{def}}{=} Q(\tilde{p}_1, \ldots, \tilde{p}_n)$  (22)
and
$Q_i \stackrel{\text{def}}{=} Q(\tilde{p}_1, \ldots, \tilde{p}_{i-1}, \tilde{p}_i + \Delta_i, \tilde{p}_{i+1}, \ldots, \tilde{p}_n),$  (23)
which are $\epsilon$-close to the values $\tilde{q}$ and $q_i$. We can apply Eq. (9) to these approximate values and get
$\overline{Q} = \tilde{Q} + \sum_{i=1}^{n} |Q_i - \tilde{Q}|.$  (24)
Here, $|\tilde{Q} - \tilde{q}| \le \epsilon$ and $|Q_i - q_i| \le \epsilon$, hence
$\tilde{q} - \epsilon \le \tilde{Q} \le \tilde{q} + \epsilon$
and
$|q_i - \tilde{q}| - 2\epsilon \le |Q_i - \tilde{Q}| \le |q_i - \tilde{q}| + 2\epsilon.$
By adding up all these inequalities, we conclude that
$|\overline{Q} - \overline{q}| \le (2n + 1) \cdot \epsilon.$  (25)
Thus, the only information that we have about the desired upper bound $\overline{q}$ is that $\overline{q} \le B$, where
$B \stackrel{\text{def}}{=} \overline{Q} + (2n + 1) \cdot \epsilon.$  (26)

Hence, we arrive at the following method.

How This Model Inaccuracy Affects the Above Checking Algorithms: Resulting Method.

We know:

  • An algorithm $Q(p_1, \ldots, p_n)$ that, given the values of the parameters $p_1, \ldots, p_n$, computes the value of the quantity q with a known accuracy $\epsilon$.

  • A threshold t that needs to be satisfied.

  • For each parameter $p_i$, we know its nominal value $\tilde{p}_i$ and the largest possible deviation $\Delta_i$ from this nominal value.

Based on this information, we need to check whether $q(p_1, \ldots, p_n) \le t$ for all possible combinations of values $p_i$ from the corresponding intervals $[\tilde{p}_i - \Delta_i, \tilde{p}_i + \Delta_i]$.

We can perform this checking as follows:

  1. First, we apply the algorithm Q to compute the value
     $\tilde{Q} = Q(\tilde{p}_1, \ldots, \tilde{p}_n);$  (27)

  2. Then, for each i from 1 to n, we apply the algorithm Q to compute the value
     $Q_i = Q(\tilde{p}_1, \ldots, \tilde{p}_{i-1}, \tilde{p}_i + \Delta_i, \tilde{p}_{i+1}, \ldots, \tilde{p}_n);$  (28)

  3. After that, we compute
     $B = \tilde{Q} + \sum_{i=1}^{n} |Q_i - \tilde{Q}| + (2n + 1) \cdot \epsilon;$ and  (29)

  4. Finally, we check whether $B \le t$.

If $B \le t$, this means that the desired specifications are always satisfied. If $B > t$, this means that we cannot guarantee that the specifications are always satisfied.
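A minimal Python sketch of this method (the function name check_spec_with_model_error is our own, not from the original text) could look as follows:

```python
import numpy as np

def check_spec_with_model_error(Q, p_nom, delta, t, eps):
    """Guaranteed check of q <= t when Q approximates q with accuracy eps,
    using the bound B = Q~ + sum_i |Q_i - Q~| + (2n+1)*eps from Eq. (29)."""
    p_nom = np.asarray(p_nom, float)
    delta = np.asarray(delta, float)
    n = len(p_nom)

    Q_tilde = Q(p_nom)                      # step 1: nominal tuple
    B = Q_tilde
    for i in range(n):
        p = p_nom.copy()
        p[i] += delta[i]
        B += abs(Q(p) - Q_tilde)            # steps 2-3: accumulate |Q_i - Q~|
    B += (2 * n + 1) * eps                  # step 3: worst-case model-error term
    return B, (B <= t)                      # step 4: compare with the threshold
```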

Comment 1. Please note that, in contrast to the case of the exact model, if B>t, this does not necessarily mean that the specifications are not satisfied: maybe they are satisfied, but we cannot check that because we know only approximate values of q.

Comment 2. Similar bounds can be found for the estimates based on the Cauchy distribution.

Comment 3. The above estimate B is not the best that we can get, but it has been proven that computing the best estimate would require unrealistic exponential time [11,12], i.e., time that grows as $2^s$ with the size s of the input; thus, when we consider only feasible algorithms, overestimation is inevitable.

Comment 4. Similar to the methods described in the previous section, instead of directly applying the algorithm Q to the modified tuples, we can, wherever appropriate, use the above-mentioned local sensitivity analysis technique.

Problem. When n is large, even for a reasonably small inaccuracy $\epsilon$, the value $(2n + 1) \cdot \epsilon$ is large.

In this paper, we show how we can get better estimates, i.e., estimates for which the guaranteed difference between the computed bound and the desired bound $\overline{q}$ is much smaller.

How to Get Better Estimates

Main Idea.

Model inaccuracy comes from the fact that we are using approximate methods to solve the corresponding equations.

Strictly speaking, the resulting inaccuracy is deterministic. However, in most cases, for all practical purposes, this inaccuracy can be viewed as random: when we select a different combination of parameters, we get an unrelated value of inaccuracy.

In other words, we can view the differences
$Q(p_1, \ldots, p_n) - q(p_1, \ldots, p_n)$  (30)
corresponding to different tuples $(p_1, \ldots, p_n)$ as independent random variables. In particular, this means that the differences $\tilde{Q} - \tilde{q}$ and $Q_i - q_i$ are independent random variables.

Technical Details.

What is a probability distribution for these random variables?

All we know about each of these variables is that its values are located somewhere in the interval [ϵ,ϵ]. We do not have any reason to assume that some values from this interval are more probable than others, so it is reasonable to assume that all the values are equally probable, i.e., that we have a uniform distribution on this interval.

For this uniform distribution, the mean is 0, and the standard deviation is $\sigma = \epsilon / \sqrt{3}$, i.e., the variance is $\sigma^2 = \epsilon^2 / 3$.

Auxiliary Idea: How to Get a Better Estimate for $\tilde{q}$.

In our main algorithm, we apply the computational model Q to n + 1 different tuples. What we suggest is to apply it to one more tuple (making it n + 2 tuples), namely, computing an approximation
$M \stackrel{\text{def}}{=} Q(\tilde{p}_1 - \Delta_1, \ldots, \tilde{p}_n - \Delta_n)$  (31)
to the value
$m \stackrel{\text{def}}{=} q(\tilde{p}_1 - \Delta_1, \ldots, \tilde{p}_n - \Delta_n).$  (32)
In the linearized case (Eq. (4)), one can easily check that
$\tilde{q} + m + \sum_{i=1}^{n} q_i = (n + 2) \cdot \tilde{q},$  (33)
i.e.,
$\tilde{q} = \dfrac{\tilde{q} + m + \sum_{i=1}^{n} q_i}{n + 2}.$  (34)
Thus, we can use the following equation to come up with a new estimate $\tilde{Q}_{\rm new}$ for $\tilde{q}$:
$\tilde{Q}_{\rm new} \stackrel{\text{def}}{=} \dfrac{\tilde{Q} + M + \sum_{i=1}^{n} Q_i}{n + 2}.$  (35)
For the differences $\Delta\tilde{q}_{\rm new} \stackrel{\text{def}}{=} \tilde{Q}_{\rm new} - \tilde{q}$, $\Delta\tilde{q} \stackrel{\text{def}}{=} \tilde{Q} - \tilde{q}$, $\Delta q_i \stackrel{\text{def}}{=} Q_i - q_i$, and $\Delta m \stackrel{\text{def}}{=} M - m$, we have the following equation:
$\dfrac{\Delta\tilde{q} + \Delta m + \sum_{i=1}^{n} \Delta q_i}{n + 2} = \Delta\tilde{q}_{\rm new}.$  (36)

The left-hand side of Eq. (36) is the arithmetic average of n + 2 independent identically distributed random variables, each with mean 0 and variance $\sigma^2 = \epsilon^2/3$. Hence (see, e.g., [13]), the mean of this average $\Delta\tilde{q}_{\rm new}$ is the average of the means, i.e., 0, and its variance is equal to $\dfrac{\epsilon^2}{3 \cdot (n + 2)} \ll \dfrac{\epsilon^2}{3} = \sigma^2[\Delta\tilde{q}]$.

Thus, this average $\tilde{Q}_{\rm new}$ is a more accurate estimate of the quantity $\tilde{q}$ than $\tilde{Q}$.

Let us Use This Better Estimate for $\tilde{q}$ When Estimating the Upper Bound $\overline{q}$.

Because the average $\tilde{Q}_{\rm new}$ is a more accurate estimate of the quantity $\tilde{q}$ than $\tilde{Q}$, let us use this average instead of $\tilde{Q}$ when estimating $\overline{Q}$. In other words, instead of the estimate (24), let us use a new estimate
$\overline{Q}_{\rm new} = \tilde{Q}_{\rm new} + \sum_{i=1}^{n} |Q_i - \tilde{Q}_{\rm new}|.$  (37)

Let us estimate the accuracy of this new approximation.

Equation (9) can be described in the following equivalent form:
$\overline{q} = \tilde{q} + \sum_{i=1}^{n} s_i \cdot (q_i - \tilde{q}) = \left(1 - \sum_{i=1}^{n} s_i\right) \cdot \tilde{q} + \sum_{i=1}^{n} s_i \cdot q_i,$  (38)
where $s_i \in \{-1, 1\}$ are the signs of the differences $q_i - \tilde{q}$.
Similarly, we get
$\overline{Q}_{\rm new} = \left(1 - \sum_{i=1}^{n} s_i\right) \cdot \tilde{Q}_{\rm new} + \sum_{i=1}^{n} s_i \cdot Q_i.$  (39)
Thus, for the difference $\Delta\overline{q} \stackrel{\text{def}}{=} \overline{Q}_{\rm new} - \overline{q}$, we have
$\Delta\overline{q} = \left(1 - \sum_{i=1}^{n} s_i\right) \cdot \Delta\tilde{q}_{\rm new} + \sum_{i=1}^{n} s_i \cdot \Delta q_i.$  (40)

Here, the differences $\Delta\tilde{q}_{\rm new}$ and $\Delta q_i$ are independent random variables. According to the central limit theorem (see, e.g., [13]), for large n, the distribution of a linear combination of many independent random variables is close to Gaussian. The mean of the resulting distribution is the linear combination of the means and is thus equal to zero.

The variance of a linear combination $\sum_i k_i \cdot \eta_i$ of independent random variables $\eta_i$ with variances $\sigma_i^2$ is equal to $\sum_i k_i^2 \cdot \sigma_i^2$. Thus, in our case, the variance $\sigma^2$ of the difference $\Delta\overline{q}$ is equal to
$\sigma^2 = \left(1 - \sum_{i=1}^{n} s_i\right)^2 \cdot \dfrac{\epsilon^2}{3 \cdot (n + 2)} + n \cdot \dfrac{\epsilon^2}{3}.$  (41)
Here, because $|s_i| \le 1$, we have $\left|1 - \sum_{i=1}^{n} s_i\right| \le n + 1$, so Eq. (41) implies that
$\sigma^2 \le \dfrac{(n + 1)^2}{n + 2} \cdot \dfrac{\epsilon^2}{3} + n \cdot \dfrac{\epsilon^2}{3}.$  (42)
Here, $(n + 1)^2 / (n + 2) \le (n + 1)^2 / (n + 1) = n + 1$, hence
$\sigma^2 \le (2n + 1) \cdot \dfrac{\epsilon^2}{3}, \quad \text{i.e.,} \quad \sigma \le \epsilon \cdot \sqrt{\dfrac{2n + 1}{3}}.$  (43)
For a normal distribution, with almost complete certainty, all the values are located no more than a certain user-defined number $k_0$ of standard deviations away from the mean: within $2\sigma$ with confidence 0.95, within $3\sigma$ with degree of confidence 0.999, and within $6\sigma$ with degree of confidence $1 - 10^{-8}$. Thus, we can safely conclude that
$\overline{q} \le \overline{Q}_{\rm new} + k_0 \cdot \epsilon \cdot \sqrt{\dfrac{2n + 1}{3}}.$  (44)

Here, the inaccuracy grows as $\sqrt{2n + 1}$, which is much better than in the traditional approach, where it grows proportionally to $2n + 1$; we achieve this drastic reduction of the overestimation basically by using one more run of the program Q in addition to the previously used n + 1 runs.

So, we arrive at the following method.

Resulting Method.

We know:

  • An algorithm $Q(p_1, \ldots, p_n)$ that, given the values of the parameters $p_1, \ldots, p_n$, computes the value of the quantity q with a known accuracy $\epsilon$.

  • A threshold t that needs to be satisfied.

  • For each parameter $p_i$, we know its nominal value $\tilde{p}_i$ and the largest possible deviation $\Delta_i$ from this nominal value.

Based on this information, we need to check whether $q(p_1, \ldots, p_n) \le t$ for all possible combinations of values $p_i$ from the corresponding intervals $[\tilde{p}_i - \Delta_i, \tilde{p}_i + \Delta_i]$.

We can perform this checking as follows:

  1. First, we apply the algorithm Q to compute the value
     $\tilde{Q} = Q(\tilde{p}_1, \ldots, \tilde{p}_n);$  (45)

  2. Then, for each i from 1 to n, we apply the algorithm Q to compute the value
     $Q_i = Q(\tilde{p}_1, \ldots, \tilde{p}_{i-1}, \tilde{p}_i + \Delta_i, \tilde{p}_{i+1}, \ldots, \tilde{p}_n);$  (46)

  3. Then, we compute
     $M = Q(\tilde{p}_1 - \Delta_1, \ldots, \tilde{p}_n - \Delta_n);$  (47)

  4. We compute
     $\tilde{Q}_{\rm new} = \dfrac{\tilde{Q} + M + \sum_{i=1}^{n} Q_i}{n + 2};$  (48)

  5. We compute
     $b = \tilde{Q}_{\rm new} + \sum_{i=1}^{n} |Q_i - \tilde{Q}_{\rm new}| + k_0 \cdot \epsilon \cdot \sqrt{\dfrac{2n + 1}{3}},$  (49)
     where $k_0$ depends on the level of confidence that we want to achieve;

  6. Finally, we check whether $b \le t$.

If $b \le t$, this means that the desired specifications are always satisfied. If $b > t$, this means that we cannot guarantee that the specifications are always satisfied.
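Here is a minimal Python sketch of the resulting method, under our reading of steps 3-5 above (i.e., assuming that the extra value is $M = Q(\tilde{p}_1 - \Delta_1, \ldots, \tilde{p}_n - \Delta_n)$, that Eq. (48) is the averaged estimate $\tilde{Q}_{\rm new}$, and that Eq. (49) is the bound b); the function name is ours:

```python
import numpy as np

def check_spec_improved(Q, p_nom, delta, t, eps, k0=6.0):
    """Improved check of q <= t when Q approximates q with accuracy eps.
    Uses the averaged estimate Q~_new and the statistical error term
    k0 * eps * sqrt((2n+1)/3) instead of the worst-case (2n+1)*eps.
    k0 = 6 roughly corresponds to the 1 - 1e-8 confidence level."""
    p_nom = np.asarray(p_nom, float)
    delta = np.asarray(delta, float)
    n = len(p_nom)

    Q_tilde = Q(p_nom)                              # step 1: nominal tuple
    Q_i = np.empty(n)
    for i in range(n):                              # step 2: one-at-a-time perturbations
        p = p_nom.copy()
        p[i] += delta[i]
        Q_i[i] = Q(p)
    M = Q(p_nom - delta)                            # step 3: extra tuple (p~_i - Delta_i), our reading of Eq. (47)
    Q_new = (Q_tilde + M + Q_i.sum()) / (n + 2)     # step 4: improved estimate of q~
    b = Q_new + np.sum(np.abs(Q_i - Q_new)) \
        + k0 * eps * np.sqrt((2 * n + 1) / 3.0)     # step 5: bound with the k0-sigma error term
    return b, (b <= t)                              # step 6: compare with the threshold
```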

Comment. For the Cauchy method, similarly, after computing $\tilde{Q} = Q(\tilde{p}_1, \ldots, \tilde{p}_n)$ and $Q^{(k)} = Q(\tilde{p}_1 + \eta_1^{(k)}, \ldots, \tilde{p}_n + \eta_n^{(k)})$, we can compute the improved estimate $\tilde{Q}_{\rm new}$ for $\tilde{q}$ as
(50)
and estimate the parameter $\Delta$ based on the more accurate differences $\Delta q_{\rm new}^{(k)} = Q^{(k)} - \tilde{Q}_{\rm new}$.

Experimental Testing

Description of the Case Study.

We tested our approach on the example of the seismic inverse problem in geophysics, where we need:

  • To reconstruct the velocity of sound $v_i$ at different spatial locations and at different depths,

  • based on the times $t_j$ that it takes for a seismic signal to get from each of several set-up explosions to different seismic stations.

In this example, we are interested in the velocity of sound q at different depths and at different locations. To estimate the desired velocity of sound q, as the parameters $p_1, \ldots, p_n$, we use the travel times $t_j$.

For most materials, the velocity of sound is an increasing function of density (and of strength). Thus, e.g., in geotechnical engineering, the condition that the soil is strong enough to support a road or a building is often described in terms of a requirement that the corresponding velocity of sound exceeds a certain threshold: $q \ge t$.

Comment. This inequality looks somewhat different from the usual requirement $q \le t$. However, as we will see, the algorithm actually produces the inverse values $s \stackrel{\text{def}}{=} 1/v$. In terms of these inverse values, the requirement $q \ge t$ takes the usual form $s \le t_0$, where $t_0 \stackrel{\text{def}}{=} 1/t$.

Description of the Corresponding Algorithm.

As an algorithm Q(p1,,pn) for estimating the velocity of sound based on the measured travel times p1,,pn, we used (a somewhat improved version of) the finite element technique that was originated by Hole [14]; the resulting techniques are described in Refs. [15–17].

According to Hole's algorithm, we divide the three-dimensional volume of interest (in which we want to find the corresponding velocities) into a rectangular three-dimensional grid of N small cubic cells. We assume that the velocity is constant within each cube; the value of the velocity in the ith cube is denoted by $v_i$. Each observation j means that we know the time $t_j$ that it took the seismic wave to go from the site of the corresponding explosion to the location of the observing sensor.

This algorithm is iterative. We start with a first-approximation model of the Earth, namely, with geology-motivated approximate values $v_i^{(1)}$. At each iteration a, we start with the values $v_i^{(a)}$ and produce the next approximation $v_i^{(a+1)}$ as follows.

First, based on the latest approximation $v_i^{(a)}$, we simulate how the seismic waves propagate from the explosion site to the sensor locations. In the cube that contains the explosion site, the seismic signal propagates in all directions. When the signal's trajectory reaches the border between two cubes i and i′, the direction of the seismic wave changes in accordance with Snell's law $\sin(\theta_i)/\sin(\theta_{i'}) = v_i^{(a)}/v_{i'}^{(a)}$, where $\theta_i$ is the angle between the seismic wave's trajectory in the ith cube and the vector orthogonal to the plane separating the two cubes. Snell's law enables us to find the trajectory's direction in the next cube i′. Once the wave reaches the location of the sensor, we can estimate the travel time as $t_j^{(a)} = \sum_i \ell_{ji} / v_i^{(a)}$, where the sum is taken over all the cubes through which this trajectory passes, and $\ell_{ji}$ is the length of the part of the trajectory that lies in the ith cube.

Each predicted value $t_j^{(a)}$ is, in general, different from the observed value $t_j$. To compensate for this difference, the velocity model $v_i^{(a)}$ is corrected: namely, the inverse value (slowness) $s_i^{(a)} \stackrel{\text{def}}{=} 1/v_i^{(a)}$ is replaced with an updated value
$s_i^{(a+1)} = s_i^{(a)} + \dfrac{1}{n_i} \cdot \sum_j \dfrac{t_j - t_j^{(a)}}{L_j},$  (51)
where the sum is taken over all trajectories j that pass through the ith cube, $n_i$ is the overall number of such trajectories, and $L_j = \sum_i \ell_{ji}$ is the overall length of the jth trajectory.
Iterations stop when the process converges, e.g., it is reasonable to stop the process when the velocity models obtained on two consecutive iterations become close:
(52)
for some small value ϵ>0.
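For illustration only, here is a schematic Python sketch of the slowness-update step, assuming the update rule reconstructed in Eq. (51); the ray tracing itself (the Snell's-law propagation) is omitted, and the data layout (a ray-by-cell matrix of segment lengths) is our own choice, not part of the original description:

```python
import numpy as np

def update_slowness(s, t_obs, t_pred, ray_cell_lengths):
    """One correction step of a Hole-type scheme (schematic sketch only).
    s                : current slownesses s_i^(a) = 1/v_i^(a), shape (num_cells,)
    t_obs, t_pred    : observed and predicted travel times t_j and t_j^(a), shape (num_rays,)
    ray_cell_lengths : matrix l[j, i] = length of ray j inside cell i (0 if the ray misses it)
    """
    residual = t_obs - t_pred                                # t_j - t_j^(a)
    L = ray_cell_lengths.sum(axis=1)                         # total length L_j of each ray
    s_new = s.copy()
    for i in range(len(s)):
        rays = np.flatnonzero(ray_cell_lengths[:, i] > 0)    # rays crossing cell i (n_i of them)
        if rays.size:
            s_new[i] += np.mean(residual[rays] / L[rays])    # average residual per unit ray length
        # cells crossed by no ray keep their previous slowness
    return s_new
```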

Reasonable Way to Gauge the Quality of the Resulting Estimate for the Velocity of Sound $v_i$.

A perfect solution would be to compare our estimates with the actual velocity of sound at different depths and different locations. This is, in principle, possible: we can drill several wells at different locations and directly measure the velocity of sound at different depths. In practice, however, such drilling is extremely expensive; this is why we use the seismic experiment to measure this velocity indirectly.

Because we cannot directly gauge the accuracy of our approximations, we gauge this accuracy indirectly. Indeed, the main objective of the above iterative algorithm is to match the observations. It is therefore reasonable to gauge the quality of the resulting estimate by how well the predicted travel times $\tilde{t}_i$ ($= t_i^{(a)}$ for the last iteration a) match the observations $t_i$; usually, this is done by the root-mean-square (RMS) approximation error
$\sqrt{\dfrac{1}{n} \cdot \sum_{i} (\tilde{t}_i - t_i)^2}.$
This is indeed a practical problem in which it is important to take model inaccuracy into account. In this problem, there are two sources of uncertainty.

The first is the uncertainty with which we can measure each travel time $t_j$. The travel time is the difference between the time when the signal arrives at the sensor location and the time of the artificially set explosion. The explosion time is known with a very high accuracy, but the arrival time is not. In the ideal situation, when the only seismic signal comes from our explosion, we could exactly pinpoint the arrival time as the time when the sensor starts detecting a signal. In real life, there is always background noise, so we can determine the arrival time only with some inaccuracy.

The second source of uncertainty comes from the fact that our discrete model is only an approximate description of the continuous real Earth. Experimental data show that this second type of uncertainty is important, and it cannot be safely ignored.

Thus, our case study is indeed a particular case of a problem in which it is important to take model inaccuracy into account.

Estimating Uncertainty of the Result of Data Processing: Traditional Approach.

To compare the new method with the previously known techniques, we use the uncertainty estimates for this problem obtained in Refs. [15–17], where we used the Cauchy-based techniques to estimate how the measurement uncertainty affects the results of data processing.

In accordance with this algorithm, first, we computed the value $\tilde{Q} = Q(\tilde{p}_1, \ldots, \tilde{p}_n)$ by applying the above-described modified Hole algorithm $Q(p_1, \ldots, p_n)$ to the measured travel times.

After that, we simulated the Cauchy-distributed random variables $\eta_i^{(k)}$ and applied the same Hole algorithm to the perturbed values $\tilde{p}_i + \eta_i^{(k)}$, computing the values $Q^{(k)} = Q(\tilde{p}_1 + \eta_1^{(k)}, \ldots, \tilde{p}_n + \eta_n^{(k)})$. Based on the differences $\Delta q^{(k)} \stackrel{\text{def}}{=} Q^{(k)} - \tilde{Q}$, we then estimated the desired approximation error $\Delta$.

Let us Now Apply the New Approach to the Same Problem.

In the new approach, instead of using the original value $\tilde{Q} = Q(\tilde{p}_1, \ldots, \tilde{p}_n)$, we use a new estimate $\tilde{Q}_{\rm new}$, which is computed using Eq. (50).

Then, instead of using the original differences $\Delta q^{(k)} \stackrel{\text{def}}{=} Q^{(k)} - \tilde{Q}$, we use the new differences $\Delta q_{\rm new}^{(k)} \stackrel{\text{def}}{=} Q^{(k)} - \tilde{Q}_{\rm new}$. After that, we estimate the value $\Delta_{\rm new}$ by applying the maximum-likelihood method to these new differences.

Which Estimate is More Accurate.

To check which estimates for the velocity of sound are more accurate, the estimates $\tilde{Q}$ produced by the traditional method or the estimates $\tilde{Q}_{\rm new}$ produced by the new method, we use the above criterion for gauging the quality of different estimates. Specifically, for each of the two methods, we computed the RMS approximation error $\sqrt{\frac{1}{n} \cdot \sum_i (\tilde{t}_i - t_i)^2}$ describing how well the travel times $\tilde{t}_i$ predicted based on the estimated velocities of sound match the observations $t_i$.

We performed this comparison 16 times. In one case, the RMS approximation error increased (and not much, only by 7%). In all other 15 cases, the RMS approximation error decreased, and it decreased, on average, by 15%.

This result shows that the new method indeed leads to more accurate predictions.

Future Work: Can We Further Improve the Accuracy

How to Improve the Accuracy: A Straightforward Approach.

As we have mentioned, the inaccuracy $Q - q$ is caused by the fact that we are using a finite element method with elements of finite size. In the traditional finite element method, in which we assume that the value of each quantity within each element is constant, this inaccuracy comes from the fact that we ignore the difference between the values of the corresponding parameters within each element. For elements of linear size h, this inaccuracy $\Delta x$ is proportional to $x' \cdot h$, where $x'$ is the spatial derivative of the corresponding quantity x. In other words, the inaccuracy is proportional to the linear size h.

A straightforward way to improve the accuracy is to decrease h. For example, if we reduce h to h/2, then we decrease the resulting model inaccuracy $\epsilon$ to $\epsilon/2$.

This decrease requires more computations. The number of computations is, crudely speaking, proportional to the number of nodes. Because the elements fill the original volume and each element has volume $h^3$, the number of such elements is proportional to $h^{-3}$.

So, if we go from the original value h to a smaller value $h' < h$, then we increase the number of computations by a factor of
$K = \left(\dfrac{h}{h'}\right)^3.$  (53)

This leads to decreasing the inaccuracy by a factor of $h/h'$, which is equal to $\sqrt[3]{K}$.

For example, in this straightforward approach, if we want to decrease the inaccuracy by half ($\sqrt[3]{K} = 2$), we will have to increase the number of computation steps by a factor of $K = 8$.

Alternative Approach: Description.

An alternative approach is as follows. We select K small vectors $(\Delta_1^{(k)}, \ldots, \Delta_n^{(k)})$, $1 \le k \le K$, which add up to zero. For example, we can arbitrarily select the first K − 1 vectors and take
$\Delta_i^{(K)} = -\sum_{k=1}^{K-1} \Delta_i^{(k)}.$
Then, every time we need to estimate the value $q(p_1, \ldots, p_n)$, instead of computing $Q(p_1, \ldots, p_n)$, we compute the average
$\dfrac{1}{K} \cdot \sum_{k=1}^{K} Q(p_1 + \Delta_1^{(k)}, \ldots, p_n + \Delta_n^{(k)}).$  (54)

Why This Approach Decreases Inaccuracy.

We know that $Q(p_1 + \Delta p_1, \ldots, p_n + \Delta p_n) = q(p_1 + \Delta p_1, \ldots, p_n + \Delta p_n) + \delta q$, where, in the small vicinity of the original tuple $(p_1, \ldots, p_n)$, the expression $q(p_1 + \Delta p_1, \ldots, p_n + \Delta p_n)$ is linear, and the differences $\delta q$ are independent random variables with zero mean.

Thus, we have
$\dfrac{1}{K} \cdot \sum_{k=1}^{K} Q(p_1 + \Delta_1^{(k)}, \ldots, p_n + \Delta_n^{(k)}) = \dfrac{1}{K} \cdot \sum_{k=1}^{K} q(p_1 + \Delta_1^{(k)}, \ldots, p_n + \Delta_n^{(k)}) + \dfrac{1}{K} \cdot \sum_{k=1}^{K} \delta q^{(k)}.$  (55)

Due to linearity and the fact that $\sum_{k=1}^{K} \Delta_i^{(k)} = 0$, the first average in Eq. (55) is equal to $q(p_1, \ldots, p_n)$. The second average is the average of K independent identically distributed random variables, and we have already recalled that such averaging decreases the inaccuracy by a factor of $\sqrt{K}$.

Thus, in this alternative approach, we increase the amount of computations by a factor of K and, as a result, decrease the inaccuracy by a factor of $\sqrt{K}$.
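A minimal Python sketch of this averaging trick (the function names are ours, and drawing the first K − 1 offsets at random is just one possible choice) is given below:

```python
import numpy as np

def make_offsets(K, n, scale, rng=None):
    """Pick K-1 small random offset vectors and one more that cancels their sum,
    so that all K offsets add up to zero."""
    rng = np.random.default_rng(rng)
    first = scale * rng.uniform(-1.0, 1.0, size=(K - 1, n))
    return np.vstack([first, -first.sum(axis=0)])

def averaged_model(Q, p, offsets):
    """Evaluate the K-fold averaged model (54): the mean of Q over K small offset
    vectors that add up to zero. The zero-sum property makes the linear part of q
    cancel, while the independent model errors average out (factor sqrt(K))."""
    offsets = np.asarray(offsets, float)
    assert np.allclose(offsets.sum(axis=0), 0.0), "offset vectors must add up to zero"
    return np.mean([Q(p + d) for d in offsets])

# usage sketch: replace each call Q(p) in the checking methods by
#   averaged_model(Q, p, make_offsets(K=8, n=len(p), scale=1e-3))
```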

New Approach is Better Than the Straightforward One.

In general, $\sqrt{K} > \sqrt[3]{K}$. Thus, with the same increase in computation time, the new method provides a better improvement in accuracy than the straightforward approach: e.g., for K = 8 times more computations, the straightforward approach decreases the inaccuracy by a factor of 2, while the alternative approach decreases it by a factor of $\sqrt{8} \approx 2.8$.

Comment. The above computations refer only to the traditional finite element approach, in which we approximate each quantity within each element by a constant. In many real-life situations, it is useful to approximate each quantity within each element not by a constant, but rather by a polynomial of a given order (see, e.g., [18]), e.g., by a linear or a quadratic function. In this case, for each element size h, we have a smaller approximation error but a larger amount of computations. It is desirable to extend the above analysis to such techniques as well.

Acknowledgment

This work was supported in part by the National Science Foundation grants HRD-0734825 and HRD-1242122 (Cyber-ShARE Center of Excellence) and DUE-0926721. The authors are greatly thankful to the anonymous referees for valuable suggestions.

References

1. Elishakoff, I., Fu, C. M., Jiang, C., Ni, B. Y., Han, X., and Chen, G. S., 2015, "Comparison of Uncertainty Analyses for Crankshaft Applications," ASCE-ASME J. Risk Uncertainty Eng. Syst. Part B: Mech. Eng., 1(4), p. 041002. DOI: 10.1115/1.4030436

2. Muhanna, R. L., Mullen, R. L., and Rama Rao, M. V., 2015, "Nonlinear Finite Element Analysis of Frames Under Interval Material and Load Uncertainty," ASCE-ASME J. Risk Uncertainty Eng. Syst. Part B: Mech. Eng., 1(4), p. 041003. DOI: 10.1115/1.4030609

3. Muscolino, G., Santoro, R., and Sofi, A., 2016, "Interval Fractile Levels for Stationary Stochastic Response of Linear Structures With Uncertainty," ASCE-ASME J. Risk Uncertainty Eng. Syst. Part B: Mech. Eng., 2(1), p. 011004. DOI: 10.1115/1.4030455

4. Tangaramvong, S., Tin-Loi, F., Song, C. M., and Gao, W., 2015, "Interval Limit Analysis Within a Scaled Boundary Element Framework," ASCE-ASME J. Risk Uncertainty Eng. Syst. Part B: Mech. Eng., 1(4), p. 041004. DOI: 10.1115/1.4030471

5. Kuhn, D. R., Kacker, R. N., and Lei, Y., 2010, "Practical Combinatorial Testing," U.S. National Institute of Standards and Technology (NIST), Washington, DC.

6. Rabinovich, S., 2005, Measurement Errors and Uncertainties: Theory and Practice, Springer-Verlag, New York.

7. Kreinovich, V., 2009, "Interval Computations and Interval-Related Statistical Techniques: Tools for Estimating Uncertainty of the Results of Data Processing and Indirect Measurements," Data Modeling for Metrology and Testing in Measurement Science, F. Pavese and A. B. Forbes, eds., Birkhauser-Springer, Boston, pp. 117–145.

8. Kreinovich, V., and Ferson, S., 2004, "A New Cauchy-Based Black-Box Technique for Uncertainty in Risk Analysis," Reliab. Eng. Syst. Saf., 85(1–3), pp. 267–279. DOI: 10.1016/j.ress.2004.03.016

9. Cacuci, D. G., 2007, Sensitivity and Uncertainty Analysis: Theory, Chapman & Hall/CRC, Boca Raton, FL.

10. Saltelli, A., Chan, K., and Scott, E. M., 2009, Sensitivity Analysis, Wiley, Chichester, UK.

11. Kreinovich, V., 1994, "Error Estimation for Indirect Measurements is Exponentially Hard," Neural Parallel Scientific Comput., 2(2), pp. 225–234.

12. Kreinovich, V., Lakeyev, A., Rohn, J., and Kahl, P., 1997, Computational Complexity and Feasibility of Data Processing and Interval Computations, Kluwer, Dordrecht.

13. Sheskin, D. J., 2011, Handbook of Parametric and Nonparametric Statistical Procedures, Chapman & Hall/CRC, Boca Raton, FL.

14. Hole, J. A., 1992, "Nonlinear High-Resolution Three-Dimensional Seismic Travel Time Tomography," J. Geophys. Res., 97(B5), pp. 6553–6562. DOI: 10.1029/92JB00235

15. Averill, M. G., 2007, "Lithospheric Investigation of the Southern Rio Grande Rift," Ph.D. dissertation, Department of Geological Sciences, University of Texas at El Paso, El Paso, TX.

16. Kreinovich, V., Beck, J., Ferregut, C., Sanchez, A., Keller, G. R., Averill, M., and Starks, S. A., 2007, "Monte-Carlo-Type Techniques for Processing Interval Uncertainty, and Their Potential Engineering Applications," Reliable Comput., 13(1), pp. 25–69. DOI: 10.1007/s11155-006-9021-6

17. Pinheiro da Silva, P., Velasco, A. A., Ceberio, M., Servin, C., Averill, M. G., Del Rio, N. R., Longpré, L., and Kreinovich, V., 2008, "Propagation and Provenance of Probabilistic and Interval Uncertainty in Cyberinfrastructure-Related Data Processing and Data Fusion," Proceedings of the International Workshop on Reliable Engineering Computing REC'08, R. L. Muhanna and R. L. Mullen, eds., Savannah, GA, pp. 199–234.

18. Solin, P., Segeth, K., and Dolezel, I., 2003, Higher-Order Finite Element Methods, Chapman & Hall/CRC, Boca Raton, FL.