Static models typically include “aging” routines to bring their databases up to date or project them into the future. Such routines reweight the individual records to match outside control totals for key demographic characteristics and make other adjustments for changes in income and employment.
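To make the reweighting step concrete, the following minimal sketch shows one simple form of static aging: ratio-adjusting record weights within demographic cells so that weighted counts match outside control totals. The records, cells, and control totals are hypothetical, and the actual routines in production models such as TRIM2 and MATH are considerably more elaborate.

```python
# A minimal, illustrative sketch of static "aging" by reweighting.
# All data here are hypothetical; production aging routines adjust
# many characteristics jointly, not a single age dimension.

records = [
    # (record_id, age_group, survey_weight)
    (1, "18-34", 1500.0),
    (2, "18-34", 1200.0),
    (3, "35-64", 1800.0),
    (4, "65+",   900.0),
]

# Hypothetical control totals for the target year (e.g., from census
# projections).
control_totals = {"18-34": 3_000_000, "35-64": 2_500_000, "65+": 1_200_000}

# Current weighted total in each demographic cell.
cell_sums = {}
for _, group, weight in records:
    cell_sums[group] = cell_sums.get(group, 0.0) + weight

# Ratio-adjust each record's weight so the cell totals hit the controls.
aged = [
    (rid, group, weight * control_totals[group] / cell_sums[group])
    for rid, group, weight in records
]

for rid, group, w in aged:
    print(f"record {rid} ({group}): new weight {w:,.1f}")
```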
Dynamic models operate on longitudinal databases that contain individual histories. They “grow” their databases forward in time by applying transition probabilities to each record for events such as birth, death, marriage, and labor force status change. Within these two distinct model types, there are variations in the handling of common functions that stem from factors such as differences in client needs and in the styles of the model developers.
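The dynamic approach can likewise be illustrated with a stylized sketch: each record is carried forward one year at a time by drawing against transition probabilities for life events. The probabilities and events below are hypothetical placeholders; models such as DYNASIM2 and PRISM condition their transition probabilities on many more characteristics of each person’s history.

```python
import random

# A minimal, illustrative sketch of dynamic microsimulation: one person
# record is "grown" forward by annual stochastic draws. All probabilities
# are hypothetical.
P_DEATH_BY_AGE = {"18-64": 0.005, "65+": 0.04}
P_MARRY_IF_SINGLE = 0.06
P_LEAVE_LABOR_FORCE = 0.03

def simulate_year(person, rng):
    """Apply one year of stochastic life events to a person record."""
    age_band = "65+" if person["age"] >= 65 else "18-64"
    if rng.random() < P_DEATH_BY_AGE[age_band]:
        person["alive"] = False
        return person
    person["age"] += 1
    if not person["married"] and rng.random() < P_MARRY_IF_SINGLE:
        person["married"] = True
    if person["in_labor_force"] and rng.random() < P_LEAVE_LABOR_FORCE:
        person["in_labor_force"] = False
    return person

rng = random.Random(42)  # fixed seed so the run is reproducible
person = {"age": 30, "alive": True, "married": False, "in_labor_force": True}
for year in range(2020, 2030):
    if not person["alive"]:
        break
    person = simulate_year(person, rng)
print(person)
```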
In Chapter 3 Citro and Ross describe the different approaches taken by three static models—TRIM2, MATH, and HITSM (see below)—to two important functions of models that simulate income support programs such as AFDC and food stamps: the routines to simulate the participation decision and the routines to convert annual to monthly values. In Chapter 4 Ross compares and contrasts two major dynamic models—DYNASIM2 and PRISM (see below)—and reflects generally on the dynamic modeling approach.
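The annual-to-monthly conversion that these chapters discuss can be suggested by a small sketch: annual earnings from a survey record are spread over the months worked, and program eligibility is then tested month by month. The allocation rule and income limit below are hypothetical; TRIM2, MATH, and HITSM each use their own, more sophisticated procedures.

```python
# A minimal, illustrative annual-to-monthly allocation. The weeks-to-months
# mapping and the eligibility threshold are hypothetical.
annual_earnings = 9_000.0
weeks_worked = 26                               # from the annual survey record
months_worked = round(weeks_worked / 52 * 12)   # crude weeks-to-months map

monthly_income = [
    annual_earnings / months_worked if m < months_worked else 0.0
    for m in range(12)
]

INCOME_LIMIT = 700.0  # hypothetical monthly eligibility threshold
eligible_months = [m + 1 for m, inc in enumerate(monthly_income) if inc <= INCOME_LIMIT]
print(f"monthly income: {monthly_income}")
print(f"eligible in months: {eligible_months}")
```

The example shows why the conversion matters: a person whose annual income appears too high for a program may nonetheless be eligible in the months with no earnings.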
Given their complexity and size, microsimulation models depend heavily on computer hardware and software capabilities to operate cost-effectively. Most models in widespread use today are designed for mainframe, batch-oriented processing that minimizes the cost of single computer runs but imposes barriers to access and inhibits flexible, timely adaptation to meet new policy needs. In Chapter 5 Cotton and Sadowsky compare and contrast the mainframe computing environment for the TRIM2 model with the personal computer-based environment for the model developed by Statistics Canada, SPSD/M (see below). Cotton and Sadowsky assess likely future directions for computer hardware and software that offer potential benefits for improved microsimulation model capabilities.
Assessment of the quality of model outputs is a vitally important part of using model estimates in the policy debate and of determining fruitful directions for investment in improved model capabilities. However, for a variety of reasons, validation of microsimulation models has been a largely neglected activity. In Chapter 6 Cohen discusses the potential for using relatively new, computer-intensive sample reuse techniques to develop variance estimates for the outputs of microsimulation models. In Chapter 7 Cohen reviews the scant literature on previous microsimulation model validation studies.
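The flavor of the sample reuse techniques Cohen discusses can be indicated with a basic bootstrap sketch: resample the input records with replacement, rerun the model on each replicate, and take the variance of the replicate outputs. The “model” and data below are stand-ins; applying such techniques to a full microsimulation run raises many practical complications.

```python
import random
import statistics

def model_output(sample):
    """Stand-in for a model run: count of units below a hypothetical
    monthly income limit."""
    return sum(1 for income in sample if income <= 700.0)

rng = random.Random(0)
# Synthetic incomes standing in for a survey input file.
survey = [rng.lognormvariate(6.5, 0.8) for _ in range(500)]

# Bootstrap: resample the input with replacement and rerun the "model".
replicates = []
for _ in range(200):
    resample = [survey[rng.randrange(len(survey))] for _ in range(len(survey))]
    replicates.append(model_output(resample))

print(f"point estimate: {model_output(survey)}")
print(f"bootstrap variance: {statistics.variance(replicates):.1f}")
print(f"bootstrap std error: {statistics.stdev(replicates):.1f}")
```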