Small Sample Methodology

Traditional surveys gain their strength from sample size. Inferential statistics such as means, standard deviations, and other measures of variance generally assume samples of 50 or more, and typical surveys have several hundred respondents who are then segmented into smaller groups for analysis.

TBI applies small-sample metrics because the sample size is usually fewer than 15 respondents. Our extensive research on the reliability and validity of small-sample consensus decisions shows that several psychometric issues must be addressed when samples are this small.

  • Scaling. Narrow scales of 3 or 5 points yield almost no variability with small samples: mean ratings cluster around 4.2 with roughly plus or minus 0.4 of variation. Such measures provide almost no differentiation across survey constructs and essentially none among boards. Wider scales, such as a 7-point scale, do provide sufficient variability to differentiate across items and among multiple boards. Extensive research has shown that 7-point scales are readily understood by respondents and yield item reliabilities comparable to 5-point scales. (A numerical sketch of this effect follows this list.)
  • Variance indicators. Statistics such as the standard deviation and variance must be interpreted carefully with small samples because the underlying distributional assumptions do not hold. TBI uses several proprietary tools to examine variation within items and among items.
  • Anonymity. It is imperative that each respondent has absolute confidence that their responses will be anonymous. Without that assurance, respondents predictably over-rate every item, skewing the results toward the high end of the scale. Such inflated ratings are meaningless and provide no value for analysis, motivation, or education.
    TBI gives respondents a series of assurances regarding the confidentiality of their responses, and our commitment to respondent anonymity is absolute. Research and legal actions associated with 360° feedback have demonstrated that commitments to respondent anonymity like those TBI makes are sustainable, even when challenged in court.
  • Social demand bias. Social demand bias occurs when people respond in the way they believe is socially expected. All assessments need to guard against rater inflation from social demand, but TBI Indexes are especially vulnerable because some respondents are predictably over-conscientious or defensive about the quality of their board.
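
To make the scaling and dispersion points above concrete, the short sketch below compares two hypothetical 12-member boards rated on a 5-point and on a 7-point scale. The ratings are invented purely for illustration and are not TBI data; the range shown alongside the standard deviation is simply one distribution-free dispersion check and does not represent TBI's proprietary variance tools.

```python
from statistics import mean, stdev

# Invented ratings from two hypothetical 12-member boards (not TBI data).
# On a 5-point scale most raters cluster at 4, so two boards that differ
# in quality can look statistically identical.
board_a_5pt = [4, 4, 4, 5, 4, 4, 4, 4, 5, 4, 4, 4]
board_b_5pt = [4, 4, 5, 4, 4, 4, 4, 5, 4, 4, 4, 4]

# The same two boards re-rated on a 7-point scale have room to separate.
board_a_7pt = [5, 5, 6, 5, 5, 4, 5, 5, 6, 5, 5, 5]
board_b_7pt = [6, 7, 6, 6, 7, 6, 6, 7, 6, 6, 6, 7]

for label, ratings in [("A, 5-pt", board_a_5pt), ("B, 5-pt", board_b_5pt),
                       ("A, 7-pt", board_a_7pt), ("B, 7-pt", board_b_7pt)]:
    # Range (max - min) is a simple dispersion check that makes no
    # distributional assumptions about the ratings.
    print(f"Board {label}: mean={mean(ratings):.2f}, "
          f"sd={stdev(ratings):.2f}, range={max(ratings) - min(ratings)}")
```

On the 5-point scale both boards come out near 4.2 with a spread of about 0.4 and cannot be told apart; on the 7-point scale their means separate clearly.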

Our research on small-sample consensus decision processes, covering over 15 million respondents, indicates that social demand tends to inflate ratings on the order of 10 to 20%. Fortunately, the inflation is approximately equal across most boards, so the social demand effect is normalized in board-to-board comparisons, as the sketch below illustrates.

Our research shows that most people respond honestly when they know their responses are anonymous. The same phenomenon occurs in other democratic processes such as public elections and juries.
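
The normalization point can be seen in a minimal sketch: if every board's ratings are inflated by a roughly constant factor, board-to-board comparisons are unaffected. The board names, scores, and the 15% factor below are hypothetical.

```python
# Hypothetical illustration: a roughly uniform social-demand inflation
# (here ~15%) shifts every board's ratings but preserves their ordering.
true_scores = {"Board A": 4.8, "Board B": 5.4, "Board C": 5.9}  # invented values
inflation = 1.15

observed = {board: round(score * inflation, 2) for board, score in true_scores.items()}

rank_true = sorted(true_scores, key=true_scores.get, reverse=True)
rank_observed = sorted(observed, key=observed.get, reverse=True)

print(observed)                    # every board's rating is shifted upward
print(rank_true == rank_observed)  # True: relative comparisons are preserved
```
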
  • Safeguards. Additional safeguards are used to ensure that metric quality remains consistently high. In the unusual case where a small sample does not produce a reliable result, TBI reports flag the affected items and indicate the degree of unreliability. (A hypothetical sketch of such a flag appears after this list.)
  • Reports. TBI reports provide a great deal of data regarding the consensus of each board. They present a wide variety of data views so that the results are both understandable and reliable. These combined actions work together to maximize the reliability, credibility, and validity of TBI reports.
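
TBI's actual safeguards are proprietary and are not described here. Purely as an illustration of what a dispersion-based flag might look like, the sketch below marks an item as potentially unreliable when its ratings span more than two scale points; both the rule and the threshold are assumptions made for this example.

```python
from statistics import mean

# Hypothetical dispersion-based flag for a single survey item. The 2-point
# threshold and the rule itself are illustrative assumptions, not TBI's
# proprietary safeguards.
MAX_SPREAD = 2  # flag items whose ratings span more than 2 scale points

def flag_item(ratings):
    spread = max(ratings) - min(ratings)
    return {
        "mean": round(mean(ratings), 2),
        "spread": spread,
        "flagged": spread > MAX_SPREAD,  # wide disagreement -> low consensus
    }

print(flag_item([5, 6, 5, 6, 6, 5, 5]))   # tight consensus: not flagged
print(flag_item([2, 7, 4, 6, 3, 7, 5]))   # raters disagree: flagged
```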