In engineering design, it is important to guarantee that the values of certain quantities, such as stress level, noise level, and vibration level, stay below a certain threshold in all possible situations, i.e., for all possible combinations of the corresponding internal and external parameters. Usually, the number of possible combinations is so large that it is not possible to physically test the system for all these combinations. Instead, we form a computer model of the system and test this model. In this testing, we need to take into account that computer models are usually approximate. In this paper, we show that the existing techniques for taking model uncertainty into account overestimate the uncertainty of the results. We also show how we can get more accurate estimates.
Introduction
Bounds on Unwanted Processes: An Important Part of Engineering Specifications.
An engineering system is designed to perform certain tasks. In the process of performing these tasks, the system also generates some undesirable side effects: it can generate noise, vibration, heat, and stress.
We cannot completely eliminate these undesired effects, but the specifications for an engineering system usually require that the size of each of these effects does not exceed a certain predefined threshold (bound). It is therefore important to check that this specification is always satisfied, i.e., satisfied in all possible situations.
How Can We Check That Specifications are Satisfied for All Possible Situations: Simulations are Needed.
To fully describe each situation, we need to know which parameters characterize this situation, and we need to know the exact values of all these parameters. Some of these are external parameters, such as the wind speed and the load for a bridge; others are internal parameters, such as the exact value of the Young's modulus of a material used in the design.
Thus, without losing generality, we can always assume that the set of possible values of each parameter $x_i$ is given by
$x_i \in [\widetilde{x}_i - \Delta_i, \widetilde{x}_i + \Delta_i]$, (1)
where $\widetilde{x}_i$ is the nominal value of the $i$th parameter and $\Delta_i$ is the largest possible deviation from this nominal value.
We would like to make sure that the quantity of interest $q = f(x_1, \ldots, x_n)$ satisfies the desired inequality $q \le q_0$ for all possible combinations of values $x_1, \ldots, x_n$. Usually, there are many such parameters, and thus, there are many possible combinations: even if we limit ourselves to extreme cases, when each parameter $x_i$ is equal to either $\widetilde{x}_i - \Delta_i$ or $\widetilde{x}_i + \Delta_i$, we will still get $2^n$ possible combinations. It is therefore not feasible to physically check how the system behaves under all such combinations. Instead, we need to rely on computer simulations.
Formulation of the Problem.
There are known techniques for using computer simulation to check that the system satisfies the given specifications for all possible combinations of these parameters; see, e.g., [5] and references therein. These techniques, however, have been originally designed for the case in which we have an exact model of the system.
In principle, we can also use these techniques in more realistic situations, when the corresponding model is only approximate. However, as we show in this paper, the use of these techniques leads to an overestimation of the corresponding uncertainty. We also show that a proper modification of these techniques leads to a drastic decrease of this overestimation and, thus, to more accurate estimates.
How to Check Specifications When We Have an Exact Model of a System: Reminder
Case of an Exact Model: Description.
In this case, we assume that we know an algorithm $f$ that, given the values $x_1, \ldots, x_n$ of all the parameters, computes the exact value $q = f(x_1, \ldots, x_n)$ of the quantity of interest.
In Most Engineering Situations, Deviations From Nominal Values are Small.
Usually, possible deviations from nominal values are reasonably small; see, e.g., [6]. In this paper, we will restrict ourselves to such situations.
How to Use the Linearized Model to Check That Specifications are Satisfied: Analysis of the Problem.
To make sure that we always have $q \le q_0$, we need to guarantee that the largest possible value $\overline{q}$ of the function $f(x_1, \ldots, x_n)$ does not exceed $q_0$. Since the deviations $\Delta x_i = x_i - \widetilde{x}_i$ are small, we can safely linearize the dependence: $q \approx \widetilde{q} + \sum_{i=1}^{n} c_i \cdot \Delta x_i$, where $\widetilde{q} = f(\widetilde{x}_1, \ldots, \widetilde{x}_n)$ and $c_i = \partial f/\partial x_i$. The largest possible value of this linearized expression over all $\Delta x_i \in [-\Delta_i, \Delta_i]$ is attained when each term $c_i \cdot \Delta x_i$ is the largest possible, i.e.,
$\overline{q} = \widetilde{q} + \Delta$, where $\Delta = \sum_{i=1}^{n} |c_i| \cdot \Delta_i$. (6)
How to Estimate the Derivatives $c_i$.
In many practical cases, we have an explicit equation or, more generally, a known program for computing $f(x_1, \ldots, x_n)$. In this case, by explicitly differentiating the corresponding expression, or by applying an automatic differentiation algorithm to the corresponding program, we can get equations for computing the derivatives $c_i$.
In many real-life situations, however, our computations use proprietary software for which the corresponding program is not available to us. In such situations, we cannot use automatic differentiation tools, and we can use only the results of applying the algorithm $f$ to different tuples $(x_1, \ldots, x_n)$. In this case, in the linearized approximation, we can estimate each product $c_i \cdot \Delta_i$ as the difference $f(\widetilde{x}_1, \ldots, \widetilde{x}_i + \Delta_i, \ldots, \widetilde{x}_n) - \widetilde{q}$.
Thus, we arrive at the following technique (see, e.g., [7]).
How to Use the Linearized Model to Check That Specifications are Satisfied: Resulting Technique.
We know:
- • An algorithm $f$ that, given the values of the parameters $x_1, \ldots, x_n$, computes the value of the quantity $q = f(x_1, \ldots, x_n)$.
- • A threshold $q_0$ that needs to be satisfied: the specification requires that $q \le q_0$.
- • For each parameter $x_i$, we know its nominal value $\widetilde{x}_i$ and the largest possible deviation $\Delta_i$ from this nominal value.
Based on this information, we need to check whether $q \le q_0$ for all possible combinations of values $x_i$ from the corresponding intervals $[\widetilde{x}_i - \Delta_i, \widetilde{x}_i + \Delta_i]$.
We can perform this checking as follows:
- 1. First, we apply the algorithm $f$ to compute the value $\widetilde{q} = f(\widetilde{x}_1, \ldots, \widetilde{x}_n)$;
- 2. Then, for each $i$ from 1 to $n$, we apply the algorithm $f$ to compute the value $q_i = f(\widetilde{x}_1, \ldots, \widetilde{x}_{i-1}, \widetilde{x}_i + \Delta_i, \widetilde{x}_{i+1}, \ldots, \widetilde{x}_n)$;
- 3. After that, we compute $\overline{q} = \widetilde{q} + \Delta$, where $\Delta = \sum_{i=1}^{n} |q_i - \widetilde{q}|$; (9) and
- 4. Finally, we check whether $\overline{q} \le q_0$.
If $\overline{q} \le q_0$, this means that the desired specifications are always satisfied. If $\overline{q} > q_0$, this means that for some combinations of possible values $x_i$, the specifications are not satisfied.
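To make these four steps concrete, here is a minimal Python sketch; the black-box model `f`, the nominal values, and the deviations in the example are illustrative placeholders, not part of the original problem.

```python
# Sketch of the linearized specification check (steps 1-4 above).

def check_specification(f, x_nom, deltas, q0):
    """Return (q_bar, ok): the linearized upper bound and whether q_bar <= q0."""
    n = len(x_nom)
    q_tilde = f(x_nom)                      # step 1: run at the nominal tuple
    delta_total = 0.0
    for i in range(n):                      # step 2: one run per parameter
        x = list(x_nom)
        x[i] += deltas[i]                   # shift the i-th parameter by Delta_i
        delta_total += abs(f(x) - q_tilde)  # step 3: Delta = sum |q_i - q_tilde|
    q_bar = q_tilde + delta_total
    return q_bar, q_bar <= q0               # step 4: compare with the threshold

# Illustrative use with a toy model:
f = lambda x: 2.0 * x[0] + x[1] ** 2
print(check_specification(f, x_nom=[1.0, 3.0], deltas=[0.1, 0.05], q0=12.0))
```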
Possibility of a Further Speedup.
The technique based on Eq. (9) requires $n + 1$ calls to the program that computes $f$ for given values of the parameters: one call for the nominal tuple and one call for each of the $n$ parameters. In many practical situations, the program takes a reasonably long time to compute, and the number $n$ of parameters is large. In such situations, the corresponding computations require a very long time. A faster alternative is a Monte-Carlo-type technique based on Cauchy distributions.
The possibility to use Cauchy distributions comes from the fact that they have the following property: if $\eta_1, \ldots, \eta_n$ are independent variables that are Cauchy distributed with parameters $\Delta_1, \ldots, \Delta_n$, then, for each tuple of real numbers $c_1, \ldots, c_n$, the linear combination $\eta = c_1 \cdot \eta_1 + \cdots + c_n \cdot \eta_n$ is also Cauchy distributed, with the parameter $\Delta = \sum_{i=1}^{n} |c_i| \cdot \Delta_i$.
Thus, we can find $\Delta$ as follows [8]:
- 1. First, for $k = 1, \ldots, N$, we simulate $n$ random variables $\delta_1^{(k)}, \ldots, \delta_n^{(k)}$ which are Cauchy distributed with parameters $\Delta_1, \ldots, \Delta_n$.
- 2. For each $k$, we then estimate the linear combination $\delta^{(k)} = \sum_{i=1}^{n} c_i \cdot \delta_i^{(k)}$ as $\delta^{(k)} = q^{(k)} - \widetilde{q}$, where $q^{(k)} = f(\widetilde{x}_1 + \delta_1^{(k)}, \ldots, \widetilde{x}_n + \delta_n^{(k)})$. (12)
- 3. Based on the population of $N$ values $\delta^{(1)}, \ldots, \delta^{(N)}$ which are Cauchy distributed with the parameter $\Delta$, we find this parameter; e.g., we can use the maximum-likelihood method, according to which the desired value $\Delta$ can be found from the following equation: $\sum_{k=1}^{N} \frac{1}{1 + \left(\delta^{(k)}/\Delta\right)^2} = \frac{N}{2}$, which can be easily solved by bisection if we start with the interval $\left[0, \max_k |\delta^{(k)}|\right]$, on which the left-hand side is smaller than $N/2$ for $\Delta \to 0$ and greater than or equal to $N/2$ for $\Delta = \max_k |\delta^{(k)}|$;
- 4. Finally, we follow Eq. (6) and compute $\overline{q} = \widetilde{q} + \Delta$ (see, e.g., [8] for technical details).
In this Monte-Carlo-type technique, we need $N + 1$ calls to the program that computes $f$. The accuracy of the resulting estimate depends only on the sample size $N$ and not on the number of inputs $n$. Thus, for a fixed desired accuracy, when $n$ is sufficiently large, this method requires much fewer calls to $f$ and is, thus, much faster. For example, if we want to estimate $\Delta$ with relative accuracy 20%, then we need $N \approx 200$ simulations, so for $n \gg 200$, this method is much faster than a straightforward application of Eq. (9).
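The following Python sketch illustrates this Cauchy-deviate technique, including the maximum-likelihood equation solved by bisection; the sample size and the number of bisection steps are illustrative choices, and the sketch assumes, as in the analysis above, that the deviations stay in the region where linearization is valid.

```python
import math
import random

def cauchy_estimate_delta(f, x_nom, deltas, N=200):
    """Estimate Delta = sum |c_i| * Delta_i by the Cauchy-deviate method."""
    q_tilde = f(x_nom)
    d = []
    for _ in range(N):
        # A Cauchy deviate with parameter Delta_i: Delta_i * tan(pi * (u - 1/2)).
        pert = [x + D * math.tan(math.pi * (random.random() - 0.5))
                for x, D in zip(x_nom, deltas)]
        d.append(f(pert) - q_tilde)
    # Maximum likelihood: solve sum_k 1 / (1 + (d_k / Delta)^2) = N / 2 by
    # bisection; the left-hand side grows with Delta, from ~0 (as Delta -> 0)
    # to at least N/2 at Delta = max |d_k|.
    lo, hi = 0.0, max(abs(v) for v in d)
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if sum(1.0 / (1.0 + (v / mid) ** 2) for v in d) < N / 2:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```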
For Many Practical Problems, We Can Achieve an Even Faster Speedup.
In both methods described in this section, we apply the original algorithm $f$ several times: first, to the tuple of nominal values $(\widetilde{x}_1, \ldots, \widetilde{x}_n)$ and then, to several other tuples. For example, in the linearized method (9), we apply the algorithm to the $n$ tuples in which exactly one of the parameters is shifted: $x_i = \widetilde{x}_i + \Delta_i$. Each of these tuples is close to the nominal tuple; thus, instead of running the whole algorithm from scratch for each of them, we can take the results of the nominal computation as a starting point and compute only the resulting small corrections.
This idea, known as local sensitivity analysis, is successfully used in many practical applications; see, e.g., [9,10].
Comment. As we have mentioned earlier, in this paper, we consider only situations in which the deviations from the nominal values are small. In some practical situations, some of these deviations are not small. In such situations, we can no longer use linearization, and we need to use global optimization techniques and methods of global sensitivity analysis; see, e.g., [9,10].
What If We Take Into Account Model Inaccuracy
Models are Rarely Exact.
Engineering systems are usually complex. As a result, it is rarely possible to find an explicit expression for $q$ as a function of the parameters $x_1, \ldots, x_n$. Usually, we have some approximate computations. For example, if $q$ is obtained by solving a system of partial differential equations, we use, e.g., the finite element method to find an approximate solution and, thus, an approximate value of the quantity $q$.
How Model Inaccuracy is Usually Described.
Usually, we know the accuracy $\varepsilon > 0$ of the approximate model: for each tuple of parameters, the value computed by the approximate algorithm $\widetilde{f}$ and the actual value of the quantity of interest differ by no more than $\varepsilon$.
How This Model Inaccuracy Affects the Above Checking Algorithms: Analysis of the Problem.
Now, each call to the algorithm returns a value that may differ from the actual value by up to $\varepsilon$. Thus, the computed value $\widetilde{q}$ may differ from the actual nominal value by up to $\varepsilon$, and each difference $q_i - \widetilde{q}$ may differ from the actual difference $c_i \cdot \Delta_i$ by up to $2\varepsilon$. Therefore, to be guaranteed not to underestimate the largest possible value of $q$, we must add, to the previous estimate, the value $(2n + 1) \cdot \varepsilon$.
Hence, we arrive at the following method.
How This Model Inaccuracy Affects the Above Checking Algorithms: Resulting Method.
We know:
- • An algorithm $\widetilde{f}$ that, given the values of the parameters $x_1, \ldots, x_n$, computes the value of the quantity $q$ with a known accuracy $\varepsilon$.
- • A threshold $q_0$ that needs to be satisfied: the specification requires that $q \le q_0$.
- • For each parameter $x_i$, we know its nominal value $\widetilde{x}_i$ and the largest possible deviation $\Delta_i$ from this nominal value.
Based on this information, we need to check whether $q \le q_0$ for all possible combinations of values $x_i$ from the corresponding intervals $[\widetilde{x}_i - \Delta_i, \widetilde{x}_i + \Delta_i]$.
We can perform this checking as follows:
- 1. First, we apply the algorithm to compute the value $\widetilde{q} = \widetilde{f}(\widetilde{x}_1, \ldots, \widetilde{x}_n)$; (27)
- 2. Then, for each $i$ from 1 to $n$, we apply the algorithm to compute the value $q_i = \widetilde{f}(\widetilde{x}_1, \ldots, \widetilde{x}_{i-1}, \widetilde{x}_i + \Delta_i, \widetilde{x}_{i+1}, \ldots, \widetilde{x}_n)$; (28)
- 3. After that, we compute $\overline{q} = \widetilde{q} + \sum_{i=1}^{n} |q_i - \widetilde{q}| + (2n + 1) \cdot \varepsilon$; (29) and
- 4. Finally, we check whether $\overline{q} \le q_0$.
If $\overline{q} \le q_0$, this means that the desired specifications are always satisfied. If $\overline{q} > q_0$, this means that we cannot guarantee that the specifications are always satisfied.
Comment 1. Please note that, in contrast to the case of the exact model, if $\overline{q} > q_0$, this does not necessarily mean that the specifications are not satisfied: maybe they are satisfied, but we cannot check that, because we know only approximate values of $q$.
Comment 2. Similar bounds can be found for the estimates based on the Cauchy distribution.
Comment 3. The above estimate is not the best that we can get, but it has been proven that computing the best estimate would require unrealistic exponential time [11,12], i.e., time that grows as $2^n$ with the size $n$ of the input; thus, when we consider only feasible algorithms, some overestimation is inevitable.
Comment 4. Similar to the methods described in the previous section, instead of directly applying the algorithm to the modified tuples, we can, wherever appropriate, use the above-mentioned local sensitivity analysis technique.
Problem. When $n$ is large, then, even for reasonably small inaccuracy $\varepsilon$, the additional term $(2n + 1) \cdot \varepsilon$ is large.
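In code, the only change with respect to the exact-model sketch given earlier is the extra worst-case term; here is a minimal Python variant, with the $(2n + 1) \cdot \varepsilon$ correction written out as in the analysis above.

```python
def check_specification_approx(f_tilde, x_nom, deltas, q0, eps):
    """Conservative check when f_tilde is only accurate to within eps.

    Each computed value may be off by up to eps, so q_tilde may be off by up
    to eps and each |q_i - q_tilde| by up to 2 * eps; hence the guaranteed
    upper bound acquires the extra (2n + 1) * eps term.
    """
    n = len(x_nom)
    q_tilde = f_tilde(x_nom)
    delta_total = 0.0
    for i in range(n):
        x = list(x_nom)
        x[i] += deltas[i]
        delta_total += abs(f_tilde(x) - q_tilde)
    q_bar = q_tilde + delta_total + (2 * n + 1) * eps
    return q_bar, q_bar <= q0
```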
In this paper, we show how we can get better estimates, i.e., estimates $\overline{q}$ for which the difference between the computed bound and the actual largest possible value of $q$ is much smaller.
How to Get Better Estimates
Main Idea.
Model inaccuracy comes from the fact that we are using approximate methods to solve the corresponding equations.
Strictly speaking, the resulting inaccuracy is deterministic. However, in most cases, for all practical purposes, this inaccuracy can be viewed as random: when we select a different combination of parameters, we get an unrelated value of inaccuracy.
Technical Details.
What is the probability distribution of these random variables?
All we know about each of these variables is that its values are located somewhere in the interval $[-\varepsilon, \varepsilon]$. We do not have any reason to assume that some values from this interval are more probable than others, so it is reasonable to assume that all the values are equally probable, i.e., that we have a uniform distribution on this interval.
For this uniform distribution, the mean is 0, and the standard deviation is $\sigma = \varepsilon/\sqrt{3}$.
Auxiliary Idea: How to Get a Better Estimate for $\widetilde{q}$.
Specifically, in addition to the $n + 1$ values $\widetilde{q}, q_1, \ldots, q_n$ that we have already computed, let us apply the algorithm $\widetilde{f}$ one more time, to the tuple $(\widetilde{x}_1 - \Delta_1, \ldots, \widetilde{x}_n - \Delta_n)$; let us denote the result of this additional run by $q_-$. In the linearized approximation, the exact (error-free) values corresponding to $\widetilde{q}$, $q_i$, and $q_-$ are, respectively, $\widetilde{Q}$, $\widetilde{Q} + c_i \cdot \Delta_i$, and $\widetilde{Q} - \sum_i c_i \cdot \Delta_i$, where $\widetilde{Q}$ denotes the exact nominal value; these exact values add up to $(n + 2) \cdot \widetilde{Q}$. Thus, for the average $\widetilde{q}_{\rm new} \stackrel{\rm def}{=} \dfrac{\widetilde{q} + q_1 + \cdots + q_n + q_-}{n + 2}$, we get $\dfrac{\delta_0 + \delta_1 + \cdots + \delta_n + \delta_-}{n + 2} = \widetilde{q}_{\rm new} - \widetilde{Q}$, where $\delta_0, \delta_i$, and $\delta_-$ denote the inaccuracies of the corresponding runs.
The left-hand side is the arithmetic average of $n + 2$ independent identically distributed random variables, with mean 0 and variance $\sigma^2$. Hence (see, e.g., [13]), the mean of this average is the average of the means, i.e., 0, and its variance is equal to $\sigma^2/(n + 2)$.
Thus, this average $\widetilde{q}_{\rm new}$ is a much more accurate estimate of the quantity $\widetilde{Q}$ than the original value $\widetilde{q}$.
Let Us Use This Better Estimate for $\widetilde{q}$ When Estimating the Upper Bound $\overline{q}$.
Specifically, instead of the differences $q_i - \widetilde{q}$, let us use the differences $q_i - \widetilde{q}_{\rm new}$, i.e., let us estimate the upper bound as $\overline{q} = \widetilde{q}_{\rm new} + \sum_{i=1}^{n} |q_i - \widetilde{q}_{\rm new}|$.
Let us estimate the accuracy of this new approximation.
Here, the inaccuracies $\delta_0, \delta_1, \ldots, \delta_n, \delta_-$ of the individual runs are independent random variables, and the error of the new estimate is a linear combination of these inaccuracies. According to the central limit theorem (see, e.g., [13]), for large $n$, the distribution of a linear combination of many independent random variables is close to Gaussian. The mean of the resulting distribution is the linear combination of the means, thus equal to zero.
Here, the standard deviation of the resulting error, and hence the inaccuracy, grows as $\sqrt{n} \cdot \sigma$, which is much better than in the traditional approach, where it grows proportionally to $n \cdot \varepsilon$; and we achieve this drastic reduction of the overestimation basically by using one more run of the program in addition to the previously used $n + 1$ runs.
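A small Monte Carlo experiment (illustrative, not from the paper) makes this contrast tangible: we take a linear model with positive coefficients, add an independent uniform $[-\varepsilon, \varepsilon]$ error to every run, and compare the typical error of the estimate $\sum_i |q_i - \widetilde{q}|$ with that of $\sum_i |q_i - \widetilde{q}_{\rm new}|$.

```python
import random

def compare_estimates(n=100, eps=0.01, trials=1000):
    """Average |estimate - Delta_exact| for the traditional and new estimates.

    Illustrative setup: exact nominal value 0 and positive coefficients c_i,
    so the exact q_i are c_i * Delta_i and the exact value of the extra run
    at (x_i - Delta_i) is -sum c_i * Delta_i.
    """
    c = [random.uniform(0.5, 1.5) for _ in range(n)]
    D = [random.uniform(0.5, 1.5) for _ in range(n)]
    delta_exact = sum(ci * Di for ci, Di in zip(c, D))
    noise = lambda: random.uniform(-eps, eps)
    err_old = err_new = 0.0
    for _ in range(trials):
        q_t = noise()                                    # run at the nominal tuple
        q = [ci * Di + noise() for ci, Di in zip(c, D)]  # runs q_1, ..., q_n
        q_m = -delta_exact + noise()                     # extra run at (x_i - Delta_i)
        q_new = (q_t + sum(q) + q_m) / (n + 2)           # improved nominal estimate
        err_old += abs(sum(abs(qi - q_t) for qi in q) - delta_exact)
        err_new += abs(sum(abs(qi - q_new) for qi in q) - delta_exact)
    return err_old / trials, err_new / trials

print(compare_estimates())   # the first (traditional) error is much larger
```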
So, we arrive at the following method.
Resulting Method.
We know:
- • An algorithm $\widetilde{f}$ that, given the values of the parameters $x_1, \ldots, x_n$, computes the value of the quantity $q$ with a known accuracy $\varepsilon$.
- • A threshold $q_0$ that needs to be satisfied: the specification requires that $q \le q_0$.
- • For each parameter $x_i$, we know its nominal value $\widetilde{x}_i$ and the largest possible deviation $\Delta_i$ from this nominal value.
Based on this information, we need to check whether $q \le q_0$ for all possible combinations of values $x_i$ from the corresponding intervals $[\widetilde{x}_i - \Delta_i, \widetilde{x}_i + \Delta_i]$.
We can perform this checking as follows:
- 1. First, we apply the algorithm to compute the value $\widetilde{q} = \widetilde{f}(\widetilde{x}_1, \ldots, \widetilde{x}_n)$; (45)
- 2. Then, for each $i$ from 1 to $n$, we apply the algorithm to compute the value $q_i = \widetilde{f}(\widetilde{x}_1, \ldots, \widetilde{x}_{i-1}, \widetilde{x}_i + \Delta_i, \widetilde{x}_{i+1}, \ldots, \widetilde{x}_n)$; (46)
- 3. Then, we apply the algorithm one more time, to the tuple $(\widetilde{x}_1 - \Delta_1, \ldots, \widetilde{x}_n - \Delta_n)$, denote the result by $q_-$, and compute the improved estimate $\widetilde{q}_{\rm new} = \dfrac{\widetilde{q} + q_1 + \cdots + q_n + q_-}{n + 2}$; (47)
- 4. We compute $\Delta = \sum_{i=1}^{n} |q_i - \widetilde{q}_{\rm new}|$; (48)
- 5. We compute $\overline{q} = \widetilde{q}_{\rm new} + \Delta + k_0 \cdot \sqrt{n} \cdot \sigma$, where $\sigma = \varepsilon/\sqrt{3}$ and the parameter $k_0$ depends on the level of confidence that we want to achieve; (49)
- 6. Finally, we check whether $\overline{q} \le q_0$.
If $\overline{q} \le q_0$, this means that the desired specifications are always satisfied. If $\overline{q} > q_0$, this means that we cannot guarantee that the specifications are always satisfied.
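Under the reconstruction above, the whole procedure takes $n + 2$ runs of the approximate model. Here is a hedged Python sketch; the default $k_0 = 2$ and the function names are illustrative assumptions, not prescriptions from the original text.

```python
import math

def check_specification_new(f_tilde, x_nom, deltas, q0, eps, k0=2.0):
    """Improved check: replace the worst-case (2n+1)*eps correction by a
    probabilistic k0 * sqrt(n) * sigma term, with sigma = eps / sqrt(3)."""
    n = len(x_nom)
    q_tilde = f_tilde(x_nom)                            # step 1: nominal run
    qs = []
    for i in range(n):                                  # step 2: runs q_1..q_n
        x = list(x_nom)
        x[i] += deltas[i]
        qs.append(f_tilde(x))
    # Step 3: one extra run at (x_i - Delta_i); in the linearized model, the
    # exact parts of q_tilde, q_1..q_n, and q_minus add up to (n+2) * q_exact.
    q_minus = f_tilde([x - D for x, D in zip(x_nom, deltas)])
    q_new = (q_tilde + sum(qs) + q_minus) / (n + 2)
    delta_total = sum(abs(qi - q_new) for qi in qs)     # step 4
    sigma = eps / math.sqrt(3.0)
    q_bar = q_new + delta_total + k0 * math.sqrt(n) * sigma   # step 5
    return q_bar, q_bar <= q0                           # step 6
```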
Experimental Testing
Description of the Case Study.
We tested our approach on the example of the seismic inverse problem in geophysics, where we need to reconstruct the velocity of sound at different spatial locations and at different depths, based on the times that it takes for a seismic signal to get from several setup explosions to different seismic stations.
In this example, we are interested in the velocities of sound $v_i$ at different depths and at different locations. To estimate these desired velocities, we use, as the parameters, the measured travel times $t_j$.
For most materials, the velocity of sound is an increasing function of density (and of strength). Thus, e.g., in geotechnical engineering, the condition that the soil is strong enough to support a road or a building is often described in terms of a requirement that the corresponding velocity of sound exceeds a certain threshold: $v \ge v_0$.
Comment. This inequality looks somewhat different from the usual requirement $q \le q_0$. However, as we will see, the algorithm actually produces the inverse values $s_i = 1/v_i$. In terms of these inverse values, the requirement takes the usual form $s_i \le s_0$, where $s_0 = 1/v_0$.
Description of the Corresponding Algorithm.
As an algorithm for estimating the velocities of sound $v_i$ based on the measured travel times $t_j$, we used (a somewhat improved version of) the finite element technique that originated with Hole [14]; the resulting techniques are described in Refs. [15–17].
According to Hole’s algorithm, we divide the three-dimensional volume of interest (in which we want to find the corresponding velocities) into a rectangular three-dimensional grid of small cubic cells. We assume that the velocity is constant within each cube; the value of the velocity in the $i$th cube is denoted by $v_i$. Each observation means that we know the time $t_j$ that it took the seismic wave to go from the site of the corresponding explosion to the location of the observing sensor.
This algorithm is iterative. We start with the first-approximation model of the Earth, namely, with geology-motivated approximate values $v_i^{(0)}$. At each iteration $k$, we start with the values $v_i^{(k-1)}$ and produce the next approximation $v_i^{(k)}$ as follows.
First, based on the latest approximation $v_i^{(k-1)}$, we simulate how the seismic waves propagate from the explosion site to the sensor locations. In the cube that contains the explosion site, the seismic signal propagates in all directions. When the signal’s trajectory approaches the border between two cubes $i$ and $i'$, the direction of the seismic wave changes in accordance with Snell’s law $\frac{\sin \theta_i}{v_i} = \frac{\sin \theta_{i'}}{v_{i'}}$, where $\theta_i$ is the angle between the seismic wave’s trajectory in the $i$th cube and the vector orthogonal to the plane separating the two cubes. Snell’s law enables us to find the trajectory’s direction in the next cube $i'$. Once the wave reaches the location of the sensor, we can estimate the travel time as $t_j = \sum_i \frac{\ell_{ji}}{v_i}$, where the sum is taken over all the cubes through which this trajectory passes, and $\ell_{ji}$ is the length of the part of the trajectory that lies in the $i$th cube.
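To make this forward step concrete, here is a small Python sketch of the two ingredients described above, the Snell refraction at a cell boundary and the travel-time sum $t_j = \sum_i \ell_{ji}/v_i$; the geometry and the numbers in the example are illustrative.

```python
import math

def refracted_angle(theta, v, v_next):
    """Snell's law sin(theta)/v = sin(theta')/v': angle in the next cube,
    or None if the wave is totally reflected back."""
    s = math.sin(theta) * v_next / v
    return math.asin(s) if abs(s) <= 1.0 else None

def travel_time(lengths, velocities):
    """t = sum over crossed cubes of (path length in cube) / (velocity in cube)."""
    return sum(l / v for l, v in zip(lengths, velocities))

# Illustrative: a ray crossing three cubes (lengths in m, velocities in m/s).
print(refracted_angle(math.radians(30.0), 2000.0, 2500.0))
print(travel_time([120.0, 80.0, 200.0], [2000.0, 2500.0, 3000.0]))
```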
Reasonable Way to Gauge the Quality of the Resulting Estimate for the Velocity of Sound $v_i$.
A perfect solution would be to compare our estimates with the actual velocity of sound at different depths and different locations. This is, in principle, possible: we can drill several wells at different locations and directly measure the velocity of sound at different depths. In practice, however, such drilling is extremely expensive; this is why we use the seismic experiment to measure this velocity indirectly. Instead, a reasonable way to gauge the quality of the velocity estimates is to check how well the travel times predicted on the basis of these estimates match the actually observed travel times.
This is indeed a practical problem in which it is important to take model inaccuracy into account. In this problem, there are two sources of uncertainty.
The first is the uncertainty with which we can measure each travel time $t_j$. The travel time is the difference between the time when the signal arrives at the sensor location and the time of the artificially set explosion. The explosion time is known with a very high accuracy, but the arrival time is not. In the ideal situation, when the only seismic signal comes from our explosion, we could exactly pinpoint the arrival time as the time when the sensor starts detecting a signal. In real life, there is always background noise, so we can determine the arrival time only with some inaccuracy.
The second source of uncertainty comes from the fact that our discrete model is only an approximate description of the continuous real Earth. Experimental data show that this second type of uncertainty is important, and it cannot be safely ignored.
Thus, our case study is indeed a particular case of a problem in which it is important to take model inaccuracy into account.
Estimating Uncertainty of the Result of Data Processing: Traditional Approach.
To compare the new method with the previously known techniques, we use the uncertainty estimates for this problem obtained in Refs. [15–17], where we used the Cauchy-based techniques to estimate how the measurement uncertainty affects the results of data processing.
In accordance with this algorithm, first, we computed the values $\widetilde{v}_i$ by applying the above-mentioned modified Hole algorithm to the measured travel times $\widetilde{t}_j$.
After that, we simulated the Cauchy-distributed random variables $\delta_j^{(k)}$ and applied the same Hole algorithm to the perturbed values $t_j^{(k)} = \widetilde{t}_j + \delta_j^{(k)}$, computing the values $v_i^{(k)}$. Based on the differences $v_i^{(k)} - \widetilde{v}_i$, we then estimated the desired approximation errors $\Delta_i$.
Let us Now Apply the New Approach to the Same Problem.
In the new approach, instead of using the original values $\widetilde{v}_i$, we use new estimates $\widetilde{v}_i^{\,\rm new}$, which are computed by using Eq. (50).
Then, instead of using the original differences $v_i^{(k)} - \widetilde{v}_i$, we use the new differences $v_i^{(k)} - \widetilde{v}_i^{\,\rm new}$. After that, we estimate the value $\Delta_i$ by applying the maximum-likelihood method to these new differences.
Which Estimate is More Accurate.
To check which estimates for the velocity of sound are more accurate, the estimates produced by the traditional method or the estimates produced by the new method, we use the above criterion for gauging the quality of different estimates. Specifically, for each of the two methods, we computed the RMS approximation error describing how well the travel times predicted based on the estimated velocities of sound match the observations $\widetilde{t}_j$.
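This criterion takes only a couple of lines of Python (an illustrative sketch):

```python
import math

def rms_misfit(t_predicted, t_observed):
    """RMS mismatch between predicted and observed travel times."""
    return math.sqrt(sum((p - o) ** 2 for p, o in zip(t_predicted, t_observed))
                     / len(t_observed))
```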
We performed this comparison 16 times. In one case, the RMS approximation error increased (and not by much, only by 7%). In all the other 15 cases, the RMS approximation error decreased; on average, it decreased by 15%.
This result shows that the new method indeed leads to more accurate predictions.
Future Work: Can We Further Improve the Accuracy
How to Improve the Accuracy: A Straightforward Approach.
As we have mentioned, the inaccuracy $\varepsilon$ is caused by the fact that we are using a finite element method with elements of finite size. In the traditional finite element method, where we assume that the value of each quantity within each element is constant, this inaccuracy comes from the fact that we ignore the difference between the values of the corresponding parameters within each element. For elements of linear size $h$, this inaccuracy is proportional to $h \cdot d$, where $d$ is the spatial derivative of the corresponding quantity. In other words, the inaccuracy is proportional to the linear size $h$.
A straightforward way to improve the accuracy is to decrease $h$. For example, if we reduce $h$ to $h/2$, then we decrease the resulting model inaccuracy from $\varepsilon$ to $\varepsilon/2$.
This decrease requires more computations. The number of computations is, crudely speaking, proportional to the number of nodes. Because the elements fill the original three-dimensional domain and each element has volume $h^3$, the number of such elements is proportional to $h^{-3}$.
Thus, if we increase the number of computation steps by a factor of $c$, the linear size decreases from $h$ to $h \cdot c^{-1/3}$. This leads to decreasing the inaccuracy by a factor of $c^{1/3}$, which is equal to $\sqrt[3]{c}$.
For example, in this straightforward approach, if we want to decrease the inaccuracy in half ($\varepsilon \to \varepsilon/2$), we will have to increase the number of computation steps by a factor of $2^3 = 8$.
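In LaTeX form, the cost-accuracy tradeoff of this straightforward approach can be summarized as follows (assuming, as above, that the inaccuracy scales linearly with $h$ and the cost scales as $h^{-3}$):

```latex
\varepsilon \propto h, \qquad \mathrm{cost} \propto h^{-3}
\;\Longrightarrow\;
\frac{\varepsilon_{\mathrm{new}}}{\varepsilon}
   = \frac{h_{\mathrm{new}}}{h}
   = \left(\frac{\mathrm{cost}}{\mathrm{cost}_{\mathrm{new}}}\right)^{1/3},
\qquad
\varepsilon \to \frac{\varepsilon}{2}
\;\Rightarrow\;
\mathrm{cost} \to 2^{3} \cdot \mathrm{cost} = 8 \cdot \mathrm{cost}.
```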
Alternative Approach: Description.
An alternative way to improve the accuracy is, instead of decreasing the element size $h$, to apply the same algorithm $\widetilde{f}$ several ($M$) times, to tuples obtained from the original one by small random perturbations, and then to take the arithmetic average of the $M$ results.
Why This Approach Decreases Inaccuracy.
We know that $\widetilde{f}(x_1, \ldots, x_n) = f(x_1, \ldots, x_n) + \delta$, where, in the small vicinity of the original tuple $(\widetilde{x}_1, \ldots, \widetilde{x}_n)$, the expression $f(x_1, \ldots, x_n)$ is linear, and the inaccuracies $\delta^{(k)}$ corresponding to different runs are independent random variables with zero mean. Thus, the average of the $M$ results can be represented as
$\frac{1}{M} \sum_{k=1}^{M} \widetilde{f}\left(x^{(k)}\right) = \frac{1}{M} \sum_{k=1}^{M} f\left(x^{(k)}\right) + \frac{1}{M} \sum_{k=1}^{M} \delta^{(k)}$. (55)
Due to linearity and the fact that the random perturbations average to (practically) zero, the first average in Eq. (55) is equal to the desired value $f(\widetilde{x}_1, \ldots, \widetilde{x}_n)$. The second average is the average of $M$ independent identically distributed random variables, and we have already recalled that this averaging decreases the inaccuracy by a factor of $\sqrt{M}$.
Thus, in this alternative approach, we increase the amount of computations by a factor of $M$ and, as a result, we decrease the inaccuracy by a factor of $\sqrt{M}$.
New Approach is Better Than the Straightforward One.
In general, for $M > 1$, we have $\sqrt{M} > \sqrt[3]{M}$. Thus, with the same increase in computation time, the new method provides a better improvement in accuracy than the straightforward approach.
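As a worked example (not from the paper): for the same $M$-fold increase in computation time,

```latex
M = 64:\qquad
\underbrace{\sqrt[3]{64} = 4}_{\text{straightforward refinement}}
\;<\;
\underbrace{\sqrt{64} = 8}_{\text{averaging of } M \text{ runs}} .
```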
Comment. The above computations refer only to the traditional finite element approach, in which we approximate each quantity within each element by a constant. In many real-life situations, it is useful to approximate each quantity within each element not by a constant, but rather by a polynomial of a given order (see, e.g., [18]), e.g., by a linear or by a quadratic function. In this case, for each element size $h$, we get a smaller approximation error but a larger amount of computations. It is desirable to extend the above analysis to such techniques as well.
Acknowledgment
This work was supported in part by the National Science Foundation grants HRD-0734825 and HRD-1242122 (Cyber-ShARE Center of Excellence) and DUE-0926721. The authors are thankful to the anonymous referees for valuable suggestions.