Managing an Identity Crisis: Forum Guide to Implementing New Federal Race and Ethnicity Categories
NFES 2008-802
October 2008

Appendix C: OMB Bridging Methodologies

Recognizing the need to address the incomparability between race data collected under the 1977 and 1997 standards, OMB published a study in 2000 presenting findings on a set of bridging methods. Table 1 below presents the four major categories of bridging techniques studied by OMB and the nine specific methodologies that fall under them:

Table 1.  Bridging methodologies outlined by OMB, by category

                    Whole Assignment           Fractional Assignment      All Inclusive
Deterministic       (1a) Smallest Group        (3a) Equal Fractions       †
                    (1b) Largest Group         (3b) NHIS Fractions
                         other than White
                    (1c) Largest Group
                    (1d) Plurality
Probabilistic       (2a) Equal Selection       *                          †
                    (2b) NHIS Fractions
All Inclusive       †                          †                          (4) All Inclusive

† Not applicable.
* OMB did not consider Probabilistic Fractional Assignment methods because they were deemed unnecessarily complex and did not improve upon the other methods.  In addition, Renn and Lunceford warn, "Attempting to estimate how often an individual might identify in different groups is a messy and political business" (Renn and Lunceford (2002), p. 13).
SOURCE: Jackson (2002) and OMB (2000).

OMB assessed these nine methodologies against nine criteria.  Briefly, the assessment criteria are:

  1. Measure change over time.  How well does the methodology recreate the population distribution under the 1977 standards?  How accurately does it assign an individual's response to the 1977 category that would have been chosen had those standards been in effect?  OMB considered this the most important criterion.
  2. Congruence with respondent's choice.  How well is the full range of a respondent's choices represented in the racial distribution?  Are some of the multiple-race respondents' responses disregarded because of the methodology or are all responses reflected in the data?
  3. Range of applicability.  How well can the methodology be applied to different contexts (e.g., populations of varying racial distributions and sizes)?
  4. Meet confidentiality and reliability standards.  OMB found that none of the methodologies introduce new confidentiality problems, but reliability may differ among the methodologies.  How reliable is the bridging estimate created under this technique?
  5. Minimize disruptions to single race distributions.  How does the methodology affect the single-race distributions?  Are the bridged single-race distributions similar to those collected under the 1997 standards?
  6. Statistically defensible.  Does the methodology conform to acceptable statistical conventions?  Are assumptions being made about how respondents would answer under 1977 standards or about the relative importance of a given race?
  7. Ease of use.  How complicated is it to produce bridge results with the methodology?  Can the method be implemented with little operational difficulty?
  8. Skill required.  What skills are needed to create bridge data under the methodology?  Can someone with relatively little statistical knowledge implement the methodology?
  9. Understandability and communicability.  How easily can the methodology be explained to and understood by the average user?

 

Below, we present the nine OMB methodologies.  Along with simple definitions, we provide basic practical descriptions of how each method produces estimates, brief discussions of the strengths and weaknesses of each as noted by OMB and other bridging researchers, and additional notes to help states in their consideration of these methods.

1. Deterministic Whole Assignment – Assignment into a single category based on a predetermined rule.
a. Smallest Group – This rule assigns multiple-race responses that include White and another racial group to the other group.  This action is based on the assumption that White is the largest group, although this is not always the case at the local level.  Responses including two or more racial groups other than White are assigned to the group with the lowest single race count in the collection.

  • White/Other race— Other race (misclassifies all who would have chosen White)
  • Other race/Smaller other race — Smaller other race in the collection (misclassifies all who would have chosen larger other race)

OMB accorded this methodology one of its least favorable reviews.  In general, bridging has little effect on the largest race groups in a population because the number of multiple-race respondents is usually quite small compared to the sizes of those groups.  Therefore, the addition of the few multiple-race people to large groups has a minimal effect.  Conversely, race-bridging tends to have a greater impact on smaller race groups such as American Indian or Alaska Native (AIAN) and Asian or Other Pacific Islander (API).  While smaller race groups are most sensitive to bridging in general, they are especially affected by this assignment methodology, which tends to exaggerate the size of minority race groups.  The smaller the group, the larger the distortion will be. 

A state in which White is not the predominant racial group (e.g., Hawaii) may find this method inappropriate for use with its population, since it would cause White/Asian multirace respondents to be assigned to the latter group even though White is the smaller of the two groups in the state.  On the other hand, states with substantial numbers of AIAN/Other race multirace individuals may consider this methodology as a way of avoiding underestimation of their AIAN population, since multirace individuals with AIAN as a component race choose AIAN as their primary race much more often than do multirace individuals of other component races.8, 9, 10
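
To make the mechanics concrete, the following minimal Python sketch applies the Smallest Group rule to hypothetical response combinations and single-race counts.  The function name, race labels, and counts are illustrative assumptions, not part of the OMB study.

```python
def bridge_smallest_group(races, single_race_counts):
    """Method 1a: assign a multiple-race response to one 1977-style category.

    White is dropped from any combination that includes it; the remaining
    component with the lowest single-race count in the collection wins.
    """
    if len(races) == 1:
        return races[0]                                   # single-race responses pass through
    non_white = [r for r in races if r != "White"] or list(races)
    return min(non_white, key=lambda r: single_race_counts[r])

# Hypothetical single-race counts for one state's collection.
counts = {"White": 60000, "Black": 25000, "Asian": 9000, "AIAN": 1500}
print(bridge_smallest_group(["White", "Asian"], counts))  # Asian (White is dropped)
print(bridge_smallest_group(["Black", "AIAN"], counts))   # AIAN (smallest non-White group)
```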

b. Largest Group other than White – This rule also allocates responses that include White and another racial group to the other group.  Responses including two or more racial groups other than White are assigned to the group with the highest single race count.

  • White/Other race — Other race (misclassifies all who would have chosen White)
  • Other race/Larger other race— Larger other race (misclassifies all who would have chosen smaller Other race)

Along with method 1a, this methodology received one of the least favorable reviews from OMB.  While smaller race groups are most sensitive to bridging in general, they are especially affected by this assignment methodology.  On the one hand, it tends to overestimate larger minority groups.  When respondents choose White and another race, for example, this method may cause the aggregate size of that other race population to be exaggerated, since some of those respondents would have chosen White if they had to select only one race.  On the other hand, it will tend to underestimate the size of smaller minority race groups; the smaller the group, the larger the distortion. 
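
A similar sketch, using the same hypothetical race labels and counts as above, shows that method 1b differs from 1a only in choosing the largest remaining group rather than the smallest.

```python
def bridge_largest_group_other_than_white(races, single_race_counts):
    """Method 1b: drop White from the combination, then assign the response
    to the remaining component race with the highest single-race count."""
    if len(races) == 1:
        return races[0]
    non_white = [r for r in races if r != "White"] or list(races)
    return max(non_white, key=lambda r: single_race_counts[r])

counts = {"White": 60000, "Black": 25000, "Asian": 9000, "AIAN": 1500}
print(bridge_largest_group_other_than_white(["White", "Black"], counts))  # Black
print(bridge_largest_group_other_than_white(["Asian", "AIAN"], counts))   # Asian (larger of the two)
```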

c. Largest Group – This rule assigns responses including two or more racial groups to the group with the highest single race count.  In this OMB method, any individual with a multirace combination including White is allocated to the White category.  This action is based on the assumption that White is the largest group, although this is not always the case at the local level.  Combinations that do not include White are assigned to the group with the highest single race count. 

  • White/Other race — White (misclassifies those who would have chosen Other race)
  • Other race/Larger other race — Larger other race in the collection (misclassifies all who would have chosen smaller Other race)

This methodology was one of the most favorably assessed by OMB.  It received a positive review in terms of the ease with which it can be used as well as its ability to produce high quality estimates on average.  However, this technique may underestimate smaller groups by misclassifying all multirace individuals who would have selected their non-White or smaller component race under the 1977 standards.  Additionally, at the local level, this simplistic methodology may produce poor estimates as it may not reflect local preferences.  It will likely diminish the size of small minority groups if multirace individuals tend to identify with those groups more often than with the larger groups.  This method tends to produce the best estimates for the White and Black groups, but poorer estimates for the smaller race groups.
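
The sketch below illustrates method 1c under the same hypothetical counts; note that any combination containing White is assigned to White regardless of local group sizes.

```python
def bridge_largest_group(races, single_race_counts):
    """Method 1c: combinations that include White go to White; all other
    combinations go to the component with the highest single-race count."""
    if len(races) == 1:
        return races[0]
    if "White" in races:
        return "White"
    return max(races, key=lambda r: single_race_counts[r])

counts = {"White": 60000, "Black": 25000, "Asian": 9000, "AIAN": 1500}
print(bridge_largest_group(["White", "Asian"], counts))  # White
print(bridge_largest_group(["Asian", "AIAN"], counts))   # Asian
```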

d. Plurality – In this method, all responses in a multiple-race category are assigned to the race group with the highest proportion of primary race responses on the National Health Interview Survey (NHIS), with "primary race" being the one race with which respondents most identify or as which their community most commonly recognizes them.11 For instance, all White/Black multirace responses would be bridged to the race with the most primary responses among White/Black individuals in the NHIS.

  • Smaller NHIS primary race/Larger NHIS primary race — Larger NHIS primary race (misclassifies all who would have chosen smaller NHIS primary race)

This methodology, along with methods 1c and 2a, received one of the most favorable assessments from OMB among the methodologies it evaluated.  NHIS-based methodologies are limited by the survey's inclusion of only the major multirace combinations and racial combinations that include only two component races.  For that small number of individuals who identify as a rare racial combination or as more than two races, therefore, NHIS-based probabilities are not available. To deal with this limitation, states may devise some method of simplifying these combinations down to only two components, perhaps using only the two largest or smallest groups identified.
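
The following sketch illustrates the plurality rule with placeholder primary-race proportions; actual proportions would have to be drawn from the NHIS as described in footnote 11, and the numbers shown here are assumptions for illustration only.

```python
# Placeholder NHIS primary-race proportions, keyed by alphabetically sorted
# two-race combination (not actual NHIS values).
nhis_primary = {
    ("Black", "White"): {"Black": 0.55, "White": 0.45},
    ("Asian", "White"): {"Asian": 0.40, "White": 0.60},
}

def bridge_plurality(races, nhis_primary):
    """Method 1d: assign the whole combination to the race chosen as primary
    by the plurality of NHIS respondents reporting that combination."""
    if len(races) == 1:
        return races[0]
    shares = nhis_primary[tuple(sorted(races))]   # raises KeyError for combinations NHIS lacks
    return max(shares, key=shares.get)

print(bridge_plurality(["White", "Black"], nhis_primary))  # Black (55 percent primary in this example)
```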
 
2.  Probabilistic Whole Assignment – Assignment into a single group using probabilities.
a. Equal Selection – This method assigns each of the multiple responses in equal fractions back to only one of the previous racial categories identified. The fractions specify the probabilities used to select a particular category (in this case they are equal selection probabilities). In practice, for example, half of White/Black respondents would be assigned to White, and the other half to Black.

  • Race 1/Race 2 — All such individuals are randomly assigned using 50/50 probability.  In practice, when bridging at the aggregate level, multirace responses are divided evenly among the component races.

Along with methods 1c and 1d, this methodology is among the most positively assessed of the OMB methodologies.  It received a positive review in terms of the relative ease with which it can be used as well as its ability to produce high quality estimates on average.  However, this method will distort the data to the degree that multirace individuals' preferences differ from equal probabilities and is particularly problematic in its allocation of AIAN populations.
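
A brief sketch of equal-probability selection follows, using Python's standard random module; the seed is included only to make the illustration reproducible.

```python
import random

def bridge_equal_selection(races, rng=random):
    """Method 2a: randomly assign the response to one component race, each
    with equal probability (50/50 for a two-race combination)."""
    if len(races) == 1:
        return races[0]
    return rng.choice(list(races))

rng = random.Random(0)                                   # seeded for a reproducible illustration
print(bridge_equal_selection(["White", "Black"], rng))   # either 'White' or 'Black', each with probability 1/2
```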

b. NHIS Fractions – This alternative assigns multiple race responses to single race categories based on the proportions of multirace respondents' choices of primary race on the NHIS.  In practice, a percentage of White/Black respondents are assigned to White based on the NHIS results, and the remaining percentage to Black.  Equal fractions are used where no information is available from NHIS.

  • Race 1/Race 2 — Random assignment of individual to either group based on NHIS primary race proportions.  Equal fractions used where NHIS data are not available.

This methodology may produce a high-quality estimate because it is based on a national sample's preferences of primary race.  As with method 1d, however, NHIS-based probabilities are not available for rare racial combinations or for respondents reporting more than two races; see the notes on methodology 1d for ways states might simplify such combinations. 
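
The sketch below illustrates NHIS-weighted random selection with placeholder proportions and an equal-selection fallback for combinations the NHIS does not cover; the proportions shown are assumptions, not actual NHIS values.

```python
import random

def bridge_nhis_selection(races, nhis_primary, rng=random):
    """Method 2b: randomly assign the response to one component race using
    NHIS primary-race proportions; use equal selection when the combination
    is not covered by NHIS."""
    if len(races) == 1:
        return races[0]
    shares = nhis_primary.get(tuple(sorted(races)))
    if shares is None:
        return rng.choice(list(races))                   # equal-fractions fallback
    return rng.choices(list(shares), weights=list(shares.values()), k=1)[0]

nhis_primary = {("Black", "White"): {"Black": 0.55, "White": 0.45}}   # placeholder values
rng = random.Random(0)
print(bridge_nhis_selection(["White", "Black"], nhis_primary, rng))   # 'Black' or 'White', weighted 55/45
```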

3. Deterministic Fractional Assignment – Assignment into multiple groups using fixed fractions.
a. Equal Fractions – This method assigns each of a respondent's multiple responses in equal fractions to each racial group identified. In effect, each multirace respondent is fractionally assigned to multiple race categories in equal parts.  These fractions must sum to one.

  • Race 1/Race 2 — Individual response split equally among races (i.e. ½ to Race 1, ½ to Race 2)

This method, while receiving a positive assessment from OMB for its ability to produce high-quality estimates on average, will distort the data to the degree that multirace individuals' preferences differ from equal probabilities.  In addition, this methodology complicates data storage because it requires multiple race categories to be marked with a fractional value for each multirace individual.  Therefore, this methodology may be better suited for bridging at the individual level, while methodologies 2a and 2b may be more appropriate for bridging at the aggregate level.
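
A minimal sketch of equal fractional assignment follows; it returns exact fractions so that the parts recorded for each respondent sum to one.  The function name is illustrative.

```python
from fractions import Fraction

def bridge_equal_fractions(races):
    """Method 3a: split one response into equal fractional counts, one per
    component race; the fractions sum to one by construction."""
    share = Fraction(1, len(races))
    return {r: share for r in races}

print(bridge_equal_fractions(["White", "Black"]))          # each race receives Fraction(1, 2)
print(bridge_equal_fractions(["White", "Black", "AIAN"]))  # each race receives Fraction(1, 3)
```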

b. NHIS Fractions – This alternative also assigns multiple race responses in fractions to each racial group identified based on fractions drawn from the results of the NHIS.  These fractions must sum to one.  For example, a Black/White respondent may be assigned 2/3 White and 1/3 Black based on NHIS primary race proportions. 

  • Race 1/Race 2 — Fraction of individual to Race 1, another fraction to Race 2 based on NHIS primary race proportions.

Like the previous technique, this methodology complicates data storage because it requires multiple race categories to be marked with a fractional value for each multirace individual.  This methodology may be better suited for bridging at the individual level, while methodologies 2a and 2b may be more appropriate for bridging at the aggregate level.  See notes on methodology 1d for additional considerations about using NHIS data.
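
The following sketch illustrates fractional assignment with placeholder NHIS proportions, falling back to equal fractions where no NHIS data are assumed to exist; the proportions are assumptions for illustration only.

```python
def bridge_nhis_fractions(races, nhis_primary):
    """Method 3b: split one response into fractional counts proportional to
    NHIS primary-race shares; use equal fractions where NHIS data are unavailable."""
    if len(races) == 1:
        return {races[0]: 1.0}
    shares = nhis_primary.get(tuple(sorted(races)))
    if shares is None:
        return {r: 1.0 / len(races) for r in races}       # equal-fractions fallback
    return dict(shares)                                    # NHIS shares already sum to one

nhis_primary = {("Black", "White"): {"Black": 0.55, "White": 0.45}}   # placeholder values
print(bridge_nhis_fractions(["White", "Black"], nhis_primary))        # {'Black': 0.55, 'White': 0.45}
```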

4.  All Inclusive Assignment – All race choices are counted as whole responses.
In this alternative, each of a multirace respondent's race responses is counted as one full response, with the respondents being assigned to every racial category they select. 

  • Race 1/Race 2 — One whole response to each (race totals exceed 100 percent).

Essentially, responses are counted rather than people, and one person can appear multiple times in the data unless the data system is designed to treat the data otherwise or "raking" is performed.  As a result, in a population of 100 with 5 people reporting two races, the total race count for the population will be 105.  It follows that the sum of all the racial categories, which includes both single and multiple race reporting, will exceed 100 percent, a fact that may exclude this method from states' consideration.
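
The sketch below reproduces the 100-person example: tallying every reported race yields 105 race counts for 100 people.

```python
from collections import Counter

def all_inclusive_counts(responses):
    """Method 4: tally one whole count for every race a respondent selects,
    so the race total can exceed the number of people."""
    tallies = Counter()
    for races in responses:
        tallies.update(races)
    return tallies

# 100 people, 5 of whom report two races.
responses = [["White"]] * 95 + [["White", "Asian"]] * 5
tallies = all_inclusive_counts(responses)
print(tallies)                    # Counter({'White': 100, 'Asian': 5})
print(sum(tallies.values()))      # 105 race counts for 100 people
```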



7 Renn and Lunceford (2002), p. 13.
8 National Health Interview Survey.
9 Jackson (2002).
10 Ingram (2003).
11 For these methodologies, the OMB study used the NHIS, a national survey that collects data on about 100,000 people each year.  Since 1997, the NHIS has included an additional question asking multiracial respondents which single-race category best describes them (i.e., their "primary race").  These response data, which are available down to the county level, could be used by agencies to ascertain proportions for use in the whole (1d, 2b) or fractional (3b) assignment of multirace respondents.  Basically, by utilizing national data collected from multirace individuals about their preferences, agencies can more accurately approximate how state and local multirace populations are likely to identify themselves under a single-race data collection system.  To access these NHIS primary race probabilities, visit the NCHS's Research Data Center.  See table 6 in the series report for probabilities based on the 1997–2000 NHIS.

