In manufacturing, tolerance analysis, also known as variational analysis, is a critical step in the design cycle for mechanical products. A mechanical engineer, design engineer, or possibly a product engineer determines the values for the geometries and dimensions of an assembly and for each of its individual components, as well as how much those values can vary (in other words, their tolerance values). The specification of these values is also known as geometric dimensioning and tolerancing (GD&T). Getting the right GD&T values is important so that a product robustly meets its performance requirements for customers, can be properly assembled, and minimizes the costs of production.
The designer can choose to apply traditional tolerance analysis, which focuses on ensuring fit for assembly, or functional tolerance analysis, which also ensures fit but additionally evaluates how robustly a product meets its functional performance requirements, such as for applied forces or specific motions. Traditional analysis is typically run near the end of the detailed design process within a CAD system. Functional analysis is usually run before detailed design begins to determine robust GD&T values, and is then frequently rerun alongside detailed CAD design.
Whether you are undertaking traditional fit-based analysis or functional tolerance analysis, there are several methods for calculating the sum of possible variances in any single direction within a stackup of assembled components so that a designer can make optimal decisions on tolerance values. These stackup calculations are key to ensuring that the manufactured products will reliably perform to customer specifications, such as for force levels or specific ranges of motion. To help quantify the GD&T values needed to robustly deliver on those specifications, designers typically complete a Failure Modes and Effects Analysis, or FMEA, which identifies functional failure modes. The stackup calculations are integral to FMEAs.
Typically a designer needs to analyze stackups in multiple directions. Which method, or combination of methods, to use for each direction will depend on factors such as how critical a stackup is to the fit and performance of the product, how many components there are in a stackup, or the complexity of geometries.
This article overviews three of the most common methods: worst case, root sum squares (RSS), and Monte Carlo simulation. Software products for tolerance analysis, like Enventive Concept, build in all three of these methods so that designers can very quickly apply any combination of them to their decision making. The article also points to additional, less common techniques to be aware of, including process tolerancing and inertial tolerancing. The contents below include:
- Worst-case stack-up calculations
- Statistical stack-up calculations
- RSS stack-up calculations
- Monte Carlo simulation stack-up calculations
- Additional stack-up calculation methods
Worst-case (WC) stack-up calculations
Worst-case methods calculate the maximum deviation of a geometric value that results from summing the variations of each connected component, and then use the result to determine whether there is a potential failure condition.
For example, consider a hinge made up of a pin, with a smaller-diameter extension or boss, that fits into bracket holes. There is a gap between where the smaller diameter begins and the inside surface of the bracket. If the gap is too small, then the combined components will be too tight for correct assembly.
A worst-case analysis to check for this too-tight failure condition will calculate the worst-case gap by:
- calculating the longest possible length of the large diameter part of the pin (dimension Amax in the adjacent drawing);
- calculating the shortest possible length of the mounting bracket’s interior using the difference between the minimum total exterior length (dimension Bmin) and the maximum thickness of the bracket support where the pin’s extension is mounted (dimension Cmax); and then
- determining the smallest possible gap (dimension Gmin) by subtracting the worst-case longest pin length from the worst-case shortest interior length.
Here’s the math for this calculation: Gmin = (Bmin – Cmax) – Amax
If that calculated worst-case gap is too tight versus specifications, you have a potential failure mode.
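To make the arithmetic concrete, here is a minimal sketch of that check in Python; the dimension values, tolerances, and the 0.05 mm gap requirement are illustrative assumptions, not values from the hinge drawing:

```python
# Minimal sketch: worst-case gap for the hinge example.
# A, B, C are nominal dimensions with symmetric +/- tolerances
# (names and numbers here are illustrative, not from the article).

def worst_case_gap(a_nom, a_tol, b_nom, b_tol, c_nom, c_tol):
    """Smallest possible gap: Gmin = (Bmin - Cmax) - Amax."""
    a_max = a_nom + a_tol   # longest large-diameter pin length
    b_min = b_nom - b_tol   # shortest bracket exterior length
    c_max = c_nom + c_tol   # thickest bracket support
    return (b_min - c_max) - a_max

# Assumed spec: Gmin must be at least 0.05 mm.
g_min = worst_case_gap(a_nom=30.0, a_tol=0.10,
                       b_nom=34.0, b_tol=0.15,
                       c_nom=3.8,  c_tol=0.05)
print(f"Worst-case gap: {g_min:.2f} mm")  # -0.10 mm: interference, a failure mode
```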
Similarly, you can check for a worst-case too-loose condition. A too-loose assembly can still be put together but could easily create a product performance failure mode, such as excessive side-to-side movement of the hinge as it rotates. Or, if we were doing this kind of calculation for an electric motor shaft and housing, a too-loose condition could lead to unacceptable noise or vibration.
In practice, worst-case methods often lead to costly reductions in the specified tolerance values for the components in a stackup. The likelihood of all those components being at their highest or lowest tolerance values at the same time is usually very low. If you design to the worst-case scenario, then you could easily end up with very small tolerances for each component, which will add to the cost of manufacturing.
However, if a particular failure mode is critical, such as for safety reasons, the designer might decide to set all the tolerances in a stackup direction using worst-case analysis.
Statistical stack-up calculations
Because worst-case methods frequently lead to excessively small and costly tolerances, statistical techniques were introduced in the early 1900s for estimating the probability that combined component variations will lead to a failure, such as an assembly being too loose or too tight. Two of the most common statistical methods are RSS and Monte Carlo simulation.
They both estimate cumulative tolerances for a particular direction within a stack of components, also known as a tolerance stack, so that you can determine a probability of an assembly reaching a failure mode, such as being too loose or too tight with respect to its required fit.
RSS stack-up calculations
The RSS method assumes a normal statistical distribution for each component’s geometric and dimensional values. This is the classic bell curve, also known as the Gaussian distribution, characterized by a mean and a standard deviation. By adding the means and taking the root sum square of the standard deviations, the method produces an estimate of the variation for an entire tolerance stack. From that combined normal distribution, an estimate of the failure rate of the stack is returned.
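As a minimal sketch of that calculation, assuming a simple linear stack where each tolerance is treated as +/-3 sigma (the part values and limits below are illustrative, not from any Enventive example):

```python
# Minimal RSS sketch: sum the means, root-sum-square the standard
# deviations, then estimate the in-spec fraction from the resulting
# normal distribution.
import math

def rss_stack(contributors, lsl, usl):
    """contributors: list of (mean, sigma) per dimension in the stack.
    Returns (stack mean, stack sigma, estimated yield fraction)."""
    mean = sum(m for m, _ in contributors)
    sigma = math.sqrt(sum(s * s for _, s in contributors))
    # Normal CDF via the error function.
    cdf = lambda x: 0.5 * (1.0 + math.erf((x - mean) / (sigma * math.sqrt(2))))
    return mean, sigma, cdf(usl) - cdf(lsl)

# Three-part stack; each +/- tolerance treated as 3 sigma.
parts = [(10.0, 0.05 / 3), (15.0, 0.08 / 3), (14.3, 0.06 / 3)]
mean, sigma, y = rss_stack(parts, lsl=39.15, usl=39.45)
print(f"mean={mean:.3f}, sigma={sigma:.4f}, yield={100 * y:.2f}%")
```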
A second RSS assumption is linearity of the geometries within the stackup. For example, a mechanism with a cam component has a non-linear contact surface and is poorly suited to RSS methods. Other methods, such as Monte Carlo simulation (see below), are well suited to modeling non-linearities.
RSS has proven to be a fast and effective method for analyzing combined tolerances within stackups. Like any modeling approach, RSS is an approximation of the actual system. Even so, Enventive finds that users of its Concept tolerance analysis software use RSS most commonly, valuing its immediate calculation results for rapid iterative decision making.
RSS also supports important design decisions by quantifying how sensitive an assembly’s fit and function are to each component’s GD&T values. If, for example, a mechanism’s component has a lever effect, so that even very small geometric or dimensional variations can lead to a failure in a required transmittal force or in assembly fit, RSS can quickly quantify this sensitivity and enable the designer to iterate on that component’s GD&T values until the sensitivity is acceptable.
Below is an example of an RSS tolerance analysis report from Enventive’s Concept software. The analysis is for a stackup of components within an automotive assembly. Each component (Model Name) has a dimension or geometry that contributes (Contributor Name) to the total length of the stackup, with a nominal value (Value) and upper and lower tolerance values (Upr/Zone and Lower). For these nominal and tolerance values, a distribution of the total stackup length has been calculated using RSS and then plotted.
You can see from the plot that the designer is targeting the total stackup length to fall between 38.32 and 40.32. But the RSS calculation shows that, for the specified nominal and tolerance values, a high number of the assemblies will exceed these targets. In fact, the RSS study estimates that only 90.4% of the assemblies will be within target, an almost 10% failure rate that is considered unacceptable in this case. The designer will want to iterate on both the nominal and tolerance values for one or more components and rerun the RSS study until an acceptable failure rate is achieved.
To accelerate the designer’s ability to iterate and make design decisions, a software tool like Concept uses the assembly’s dimensions and geometries to automatically calculate how much each component contributes to the stackup’s total variation. You can see in the report below that 46.5%, or almost half, of the stackup variation is from the Assy_brake_piston component. As a designer, you’ll want to focus your iterations on that component first.
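A minimal sketch of that kind of contribution ranking, assuming a linear stack where each contributor’s share is its variance divided by the total variance (the component names, sensitivities, and sigmas are illustrative):

```python
# Minimal sketch: rank contributors by share of the stackup variance.
# For a linear stack, the variance contribution of part i is
# (sensitivity_i * sigma_i)^2 / total variance.

def contribution_ranking(contributors):
    """contributors: list of (name, sensitivity, sigma).
    Returns (name, percent-of-variance) pairs, largest first."""
    variances = [(name, (sens * sigma) ** 2) for name, sens, sigma in contributors]
    total = sum(v for _, v in variances)
    return sorted(((name, 100.0 * v / total) for name, v in variances),
                  key=lambda item: item[1], reverse=True)

stack = [("Assy_brake_piston", 1.0, 0.030),  # illustrative values
         ("Housing_bore",      1.0, 0.020),
         ("Seal_seat",         1.0, 0.015)]
for name, pct in contribution_ranking(stack):
    print(f"{name}: {pct:.1f}% of stackup variation")
```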
In the above report, you can see values identified as Cp, Cpk, and PPM. These are statistical calculation parameters that together enable greater insights into a tolerance analysis and the resulting GD&T decision making. For more information and a detailed overview of an example, please see our resource page: What are Cp, Cpk, and PPM in Tolerance Analysis?
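For reference, here is a sketch of the textbook definitions of Cp, Cpk, and PPM for a normally distributed result; this shows the standard formulas, not necessarily how Concept computes them internally, and the inputs are illustrative:

```python
# Textbook process-capability formulas for a normal distribution.
import math

def capability(mean, sigma, lsl, usl):
    cp = (usl - lsl) / (6 * sigma)                   # spec width vs process spread
    cpk = min(usl - mean, mean - lsl) / (3 * sigma)  # penalizes off-center processes
    cdf = lambda x: 0.5 * (1.0 + math.erf((x - mean) / (sigma * math.sqrt(2))))
    ppm = (cdf(lsl) + (1.0 - cdf(usl))) * 1_000_000  # expected defects per million
    return cp, cpk, ppm

cp, cpk, ppm = capability(mean=39.35, sigma=0.04, lsl=39.15, usl=39.45)
print(f"Cp={cp:.2f}, Cpk={cpk:.2f}, PPM={ppm:.0f}")
```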
Monte Carlo simulation stack-up calculations
As your assemblies become more complex, RSS methods can lose the accuracy needed to estimate stackup failure rates, which can lead to costly manufacturing problems, or they can become impractical altogether. For many of these situations, Monte Carlo simulation is a very useful calculation method.
The roots of Monte Carlo simulation go back to World War II, when mathematicians needed a technique to estimate the probability of an uncertain event taking place. They created a method that used a series of random numbers as inputs into a model of the system of interest. By collecting and analyzing all the outputs, they were able to make probability estimates of one or more events occurring. Because of the method’s similarities to gambling games, one of the mathematicians reportedly thought of his uncle, who often played at the famous Monte Carlo Casino in Monaco and apparently did not regularly beat the odds of the house. Thus the name Monte Carlo simulation.
Mechanism situations well suited to Monte Carlo simulation include:
- Complex shapes involving non-linear surfaces, like some types of cams, or non-linear forces, like some types of springs.
- Components whose dimensional variations follow non-normal, also known as non-Gaussian, distributions. For instance, a part’s variations might not be distributed as a normal bell curve around a mean value; they might be skewed, as in a Weibull distribution.
- Changing points of contact within a mechanism, which introduce discontinuities in a stack-up calculation as the components move.
While Monte Carlo simulation is a more accurate method than RSS, and for some complex mechanisms the only practical way to analyze tolerances, keep in mind that it can be slower. More often than not, Enventive finds that its users can get the accuracy they need with RSS.
Monte Carlo simulation uses random numbers drawn from a statistical distribution to represent the geometric and dimensional variation of each individual component. A large number of trials are run, with each trial assigning a randomly sampled variation to each contributing component while the rest of the model is held constant. The combined results of these trials provide an estimate of the probability that the assembly will fail to meet requirements.
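Here is a minimal sketch of that trial loop, assuming three illustrative contributors with different distributions; the sampler functions, values, and limits are assumptions for demonstration, not Concept’s internals:

```python
# Minimal Monte Carlo stackup sketch: sample every contributor from its
# own distribution each trial, evaluate the stack, and count the
# fraction inside the limits.
import random

def monte_carlo_stack(contributors, lsl, usl, trials=20_000, seed=1):
    """contributors: list of samplers, one per dimension; each takes an RNG."""
    rng = random.Random(seed)
    in_spec = 0
    for _ in range(trials):
        total = sum(sample(rng) for sample in contributors)
        if lsl <= total <= usl:
            in_spec += 1
    return in_spec / trials

contributors = [
    lambda rng: rng.gauss(10.0, 0.05 / 3),         # normally distributed part
    lambda rng: rng.uniform(14.95, 15.05),         # uniform variation
    lambda rng: rng.triangular(14.2, 14.4, 14.3),  # triangular variation
]
yield_fraction = monte_carlo_stack(contributors, lsl=39.15, usl=39.45)
print(f"Estimated yield: {100 * yield_fraction:.1f}%")
```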
Examples of statistical distributions available in a tolerance analysis software tool like Enventive Concept include:
- Custom Gaussian distribution
- Uniform
- Triangular
- Weibull
- 2D Gaussian (for concentric circles and position constraints only)
- 2D Circular Uniform (for concentric circles and position constraints only)
- 2D Circular Gaussian (for concentric circles and position constraints only)
- 2D Pin in Hole (for pin-in-hole constraints only)
- Truncated Gaussian
- Trapezoidal
Below is a sample Monte Carlo simulation report from Enventive Concept’s tolerance analysis software. In this example, 20,000 separate calculation trials were run. The histogram shows a frequency distribution of a stackup’s variation from nominal for those 20,000 runs. The targeted variation limits are +.55 and -.58. You can see that a significant number of the runs exceed those targets. In fact, the simulation shows that 95.5% are within tolerance, which is a 4.5% failure rate. As with the RSS example above, the designer can iterate on GD&T design values and then rerun the simulation to achieve a desired failure rate.
Note that in the lower right there is a list of components, with their geometries and dimensions, that are contributors to the stackup’s variance. Each contributor has its own statistical distribution used in the simulation runs. As with the RSS example above, these contributors can be ranked by their percentage contribution to give designers a big head start on their GD&T iterations.
Additional stack-up calculation methods
Two additional stack-up calculation methods to be aware of are Inertial Tolerancing and Process Tolerancing.
Inertial Tolerancing
The three common calculation methods above measure failure rates in a binary way: either the variation of each component in a stackup, and of the stackup as a whole, is within tolerance specifications and is good, or it exceeds the specifications and is a failure. Sometimes a sports metaphor is used to describe this. For example, if a soccer ball is shot and goes into the goal, it is good. It does not matter where in the goal it goes in; it is always worth one point! And if it misses, it is bad no matter how far it misses.
Inertial Tolerancing assumes that the amount of geometric or dimensional deviation from a target, or nominal, value matters, not just whether the deviation is in or out of specification. The impact on product quality versus the level of deviation is modeled as a continuum rather than as a binary “in” or “out” value. In the sports metaphor, a scoring shot right in the center of the goal would be considered more valuable, and less valuable the further from center it lands.
The math behind Inertial Tolerancing tolerances the mean square deviation from the target rather than the distance from it. This method is considered to offer advantages over the traditional, binary approach for increasingly sophisticated product assemblies, and it can further lower production costs for a targeted level of quality. A helpful technical reference on Inertial Tolerancing is Maurice Pillet’s paper “Inertial Tolerancing” published in The TQM Magazine.
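As a minimal sketch of that criterion, using the inertia formula from Pillet’s paper (the root mean square deviation from target, combining the offset from target with the standard deviation); the process values below are illustrative:

```python
# Inertia criterion: tolerance the RMS deviation from target instead of
# checking that parts fall inside an interval.
import math

def inertia(mean, sigma, target):
    """I = sqrt(offset^2 + sigma^2), the RMS deviation from target."""
    offset = mean - target
    return math.sqrt(offset ** 2 + sigma ** 2)

# A well-centered but dispersed process and an off-center but tight one
# can have the same inertia, and therefore the same quality impact.
print(inertia(mean=10.00, sigma=0.05, target=10.0))  # 0.050
print(inertia(mean=10.04, sigma=0.03, target=10.0))  # 0.050
```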
Process Tolerancing
In the early 1960s, the idea was proposed to separate tolerances into two parts, off-centering (how far a variation’s mean is shifted from the target value) and dispersion (the spread of the variational distribution), and to calculate each part separately. The off-centerings are combined worst case, and the dispersions within a stack are combined using RSS. The resulting tolerances can then be combined from these two calculations.
This approach, now known as Process Tolerancing, is aimed at the many products whose components come from different production processes with consistently different ratios between off-centering and standard deviation. Such processes include those for which the tooling is not continuously adjustable, such as plastic molding, fine blanking, stamping, or cold heading.
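A minimal sketch of the split described above, assuming each contributor is characterized by a maximum off-centering and a dispersion sigma (all values illustrative); the off-centerings combine worst case and the dispersions combine by RSS:

```python
# Process Tolerancing sketch: worst-case sum of off-centerings plus an
# RSS combination of dispersions, followed by a simple +/-3 sigma check.
import math

def process_tolerance_stack(contributors):
    """contributors: list of (nominal, max_off_centering, sigma)."""
    nominal = sum(n for n, _, _ in contributors)
    shift = sum(abs(d) for _, d, _ in contributors)            # worst-case off-centering
    sigma = math.sqrt(sum(s * s for _, _, s in contributors))  # RSS of dispersions
    return nominal, shift, sigma

parts = [(10.0, 0.02, 0.015), (15.0, 0.03, 0.020), (14.3, 0.01, 0.010)]
nominal, shift, sigma = process_tolerance_stack(parts)
low = nominal - shift - 3 * sigma
high = nominal + shift + 3 * sigma
print(f"Stack range: {low:.3f} .. {high:.3f}")  # compare against spec limits
```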
A very good technical reference on Process Tolerancing, along with comparisons to the other methods in this article and proposed modifications to the method, can be found in Jean-Marc Judic’s paper “Process Tolerancing: a new approach to better integrate the truth of the processes in tolerance analysis and synthesis.”