module to use. This is discussed above in the situation where the modules could be identified through the choice of one or several parameters, and distributions for the “correctness” or plausibility of these parameters could be elicited. However, there are modeling situations where there is not an obvious sample space of alternatives, and even should there be a well-defined sample space, it may be extremely difficult to attribute subjective probabilities of correctness to these various alternatives. In these situations, it is common to use sensitivity analyses—running various alternative leading cases for the module—to develop some understanding of the variability resulting from the use of alternative modules. Clearly, this process can result in an underestimate of the variability, since not all possible alternatives may be included in the analysis, and this process can also result in an overestimate of the variability, since the module used in the microsimulation model might be much closer to the truth than the alternatives. The hope is that if the various alternatives used are of similar plausibility (or nearly so, given the current state of knowledge), the resulting range of the output estimates will provide some information as to the uncertainty in the output that can be attributed to misspecification of that component.
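The procedure described above can be made concrete: run the model once per plausible alternative module and report each output together with the spread of the estimates. The sketch below is a minimal illustration, not any particular microsimulation system; the modules, the take-up rates, and the toy `run_model` aggregate are all hypothetical stand-ins.

```python
def sensitivity_range(run_model, modules, data):
    """Run the model once per alternative module; return each module's
    output and the range (max minus min) across alternatives."""
    outputs = {name: run_model(m, data) for name, m in modules.items()}
    return outputs, max(outputs.values()) - min(outputs.values())

# Hypothetical alternative modules: each is a benefit take-up rule.
modules = {
    "full_takeup":    lambda income: 1.00,
    "partial_takeup": lambda income: 0.80,
    "low_takeup":     lambda income: 0.60,
}

def run_model(takeup_rule, incomes):
    # Toy aggregate: total benefits paid under a flat $100 benefit,
    # scaled by the module's take-up rate (purely illustrative).
    return sum(100.0 * takeup_rule(y) for y in incomes)

outputs, spread = sensitivity_range(run_model, modules, [20.0, 30.0, 40.0])
```

If the three modules are judged similarly plausible, the `spread` value is the kind of rough uncertainty range the text describes; it says nothing about alternatives left out of the set.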
In addition to using sensitivity analysis for assessing variability due to model misspecification, one could also use sensitivity analysis alone or together with the bootstrap to estimate uncertainty due to sampling variability of inputs from data sources, such as control totals, and the bias from errors or untimeliness in the primary database and secondary sources, such as undercoverage of the target population or misreporting of key variables. For example, to assess uncertainty due to undercoverage, one would need to reweight the primary input data set to mimic the type of reweighting that might result from undercoverage. This could be accomplished in several ways to create several artificial data sets for input into the microsimulation model. If one were also interested in assessing variance due to sampling in the input data set, one could create a family of artificial data sets for each bootstrap replication.
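The two ideas in this paragraph, reweighting the primary input file to mimic undercoverage and nesting that reweighting inside bootstrap resamples, can be sketched as follows. Everything here is an illustrative assumption: the coverage indicator, the adjustment factors, and the reweighting scheme are placeholders, not a statement of how any actual survey file is adjusted.

```python
import numpy as np

def reweight_for_undercoverage(weights, covered, adjustment):
    """Inflate the weights of cases flagged as under-covered (hypothetical
    scheme), then rescale so the total weight is preserved."""
    w = weights.astype(float).copy()
    w[~covered] *= adjustment
    return w / w.sum() * weights.sum()

def bootstrap_input_families(records, weights, n_replicates, adjustments, rng):
    """For each bootstrap resample of the input data set, yield a family of
    artificial data sets, one per undercoverage adjustment factor."""
    n = len(records)
    for _ in range(n_replicates):
        idx = rng.integers(0, n, size=n)      # resample with replacement
        covered = rng.random(n) > 0.1          # hypothetical coverage flag
        yield [
            (records[idx], reweight_for_undercoverage(weights[idx], covered, a))
            for a in adjustments
        ]

rng = np.random.default_rng(12345)
records = np.arange(100)
weights = np.ones(100)
families = list(bootstrap_input_families(records, weights, 3, [1.1, 1.3], rng))
```

Each family would then be fed through the microsimulation model; the variation across adjustments reflects undercoverage uncertainty, and the variation across replicates reflects sampling variability.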
It is not clear how large one needs K to be to apply bootstrapping to microsimulation models. There are examples in the literature for which as few as 10 replications have been profitably used (see, e.g., Diaconis and Efron, 1983); this might be all one could hope to compute with today’s models in their current computer environments, ignoring such possibilities as embedding a statistical match within the bootstrap process. When one is interested in variance estimation—or what amounts to roughly the same thing, 67 percent confidence intervals—Tibshirani (1985) and others indicate that a K of 50 is sufficient, and possibly one could compute reasonable estimates for even smaller values. This is currently difficult for some microsimulation models, but it is quite feasible for others that have been developed recently (see, e.g., Wolfson and Rowe, 1990).
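A bootstrap variance estimate with a modest K, of the size the paragraph suggests, can be sketched in a few lines. The "model" here is deliberately trivial (the mean of the input records stands in for a microsimulation output); the point is only the structure: resample the input file K times, rerun the model, and take the variance of the K estimates.

```python
import numpy as np

def bootstrap_variance(run_model, data, K, rng):
    """Estimate the sampling variance of a model output by rerunning the
    model on K bootstrap resamples of the input data set."""
    n = len(data)
    estimates = np.array([
        run_model(data[rng.integers(0, n, size=n)]) for _ in range(K)
    ])
    return estimates.var(ddof=1), estimates

rng = np.random.default_rng(0)
data = rng.normal(50.0, 10.0, size=1000)   # synthetic "input data set"
var_hat, est = bootstrap_variance(np.mean, data, K=50, rng=rng)
```

With K = 50, `var_hat` is a usable, if noisy, estimate of the sampling variance of the output; for a real microsimulation model the cost of the K reruns, not the bookkeeping, is the binding constraint.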