Methodology

Applying new thinking to the board evaluation process

TBI’s methodology applies the latest research and practice in content validation, assessment, motivation and education, small-sample metrics, and organizational change. Below are some of the specific ways in which TBI’s method differs.

  • Forward-Looking – Traditional evaluations have the disadvantage of being conducted after issues have arisen. These measures tend to produce feedback, essentially a report card on what has already occurred, with the principal purpose of reporting rather than improving the future performance of the board.
  • A “Developmetric” Approach – The very process TBI employs to create its metrics drives board development. TBI indexes apply a feed-forward approach that queries relevant content in a manner carefully designed to deliver the following benefits:
    • Education. Each construct provides content that is relevant to board members, and questions get to the heart of what board members need to know.
    • Involvement. TBI’s questions create interest and require board members to think about their roles as well as board governance, structure, and processes.
    • Big Picture Perspective. TBI’s questions broaden each director’s perspective, driving them to think about the entire board and how it operates rather than focusing only on their specialty area.
    • Feed-forward. The construction of the questions enables board members to think forward about how the board operates now and how it can improve on specific issues.
    • Knowledge Base. TBI’s indexes provide extensive content knowledge that members may reference. Many board members even print a copy of their questions so they can refer to them.
  • Sample Size – Most boards have fewer than 15 members, making it difficult to get reliable, quantitative results from a traditional evaluation (which typically needs 50+ respondents). With TBI, we apply a series of rigorous statistical and process safeguards to assure the reliability, credibility, and validity of results. Read more about sample size.
  • Robust Reporting – Our method provides boards with standard and proprietary data views and measures of item variance and reliability. The report represents the consensus, or means, of the entire board for each question and across all the questions. And because we maintain strict anonymity for our respondents, reports show variability on items but do not show which respondent(s) caused the variability. Read more about our Reports.

  • Organizational Change – TBI’s indexes motivate and, more importantly, sustain organizational change via repeated measures and recommendations for action. Repeated measures mean that a board that uses TBI at one time will expect to repeat the process, possibly a year later. The expectation that the measures will be repeated creates accountability that motivates and sustains behavioral change. Of course, some positive changes occur through board structure changes, and those are easier to sustain. TBI reports represent the consensus of the board across a large number of items and are designed to motivate positive change in board governance. The reports also include direct recommendations for action that are sensitive to the board’s scores on each question. These recommendations act as a decision support system, or expert system, designed to sustain positive organizational change.

  • Small Sample Methodology 

Traditional surveys gain their strength through sample size. Inferential statistics built on means, standard deviations, and other measures of variance generally assume sample sizes of 50 or greater. Typical surveys may have several hundred respondents who are segmented into smaller groups for analysis.

TBI applies small-sample metrics because the sample size is usually fewer than 15. Our extensive research on reliability and validity for small-sample consensus decisions shows that several psychometric issues must be considered when samples are this small.

  • Scaling. Narrow scales of 3 or 5 points yield almost no variability with small sample sizes. Mean ratings tend to cluster around 4.2, plus or minus 0.4. Such measures provide almost no differentiation across survey constructs and virtually none among boards. Wider scales, such as 7-point scales, do provide sufficient variability for differentiating across items and among multiple boards. Extensive research has shown that 7-point scales are understandable to respondents and provide item reliabilities similar to 5-point scales.
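
For illustration only, here is a minimal sketch (using hypothetical ratings, not TBI data) of how a 7-point scale leaves more room to differentiate than a 5-point scale when only a dozen directors respond:

```python
# Hypothetical ratings from 12 directors; not TBI data.
from statistics import mean, stdev

five_point = [4, 4, 5, 4, 4, 5, 4, 4, 4, 5, 4, 4]    # answers pile up at 4 and 5
seven_point = [5, 6, 7, 5, 4, 6, 5, 6, 5, 7, 4, 6]   # same raters on a wider scale

for label, scores in [("5-point", five_point), ("7-point", seven_point)]:
    print(f"{label}: mean={mean(scores):.2f}  sd={stdev(scores):.2f}  "
          f"distinct values={len(set(scores))}")
```

The wider scale yields more distinct values and more spread, which is what makes comparisons across items and across boards possible with so few respondents.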

  • Variance indicators. Statistics such as standard deviation and variance need to be examined carefully with small sample sizes because the underlying distributional assumptions do not hold. TBI uses several proprietary tools to examine variation within each item and variation among items.
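
TBI’s own variance tools are proprietary; the sketch below simply illustrates, with hypothetical data, two simple spread indicators for a single question answered by a small board:

```python
# Hypothetical 7-point responses from a 9-member board; not TBI's proprietary tools.
from statistics import stdev

responses = [6, 5, 7, 6, 6, 4, 7, 6, 5]

sample_sd = stdev(responses)                   # uses the n-1 denominator; noisy at small n
value_range = max(responses) - min(responses)  # distribution-free indicator of spread

print(f"sample sd = {sample_sd:.2f}, range = {value_range}")
```

With only a handful of respondents, the range is often a useful companion to the standard deviation, since it makes no distributional assumptions.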

  • Anonymity. It is imperative that each respondent has absolute confidence that their responses will be anonymous. Without anonymity, each respondent would predictably over-rate each item, skewing the results toward the high end of the scale. Such inflated ratings are meaningless and provide no value for analysis, motivation, or education. TBI gives respondents a series of assurances regarding the confidentiality of their responses, and our commitment to respondent anonymity is absolute. Research and legal actions associated with 360° feedback have demonstrated that anonymity commitments like those TBI makes are sustainable, even under legal challenge.

  • Social demand bias. Social demand bias occurs when people respond as they believe is socially expected. All assessments need to guard against rater inflation from social demand, but TBI indexes are especially vulnerable because some respondents are predictably over-conscientious or defensive about the quality of their board.

Our research, covering over 15 million respondents in small-sample consensus decision processes, indicates that social demand tends to inflate ratings on the order of 10 to 20%. Fortunately, the skew is approximately equal across most boards, so the social demand effect is normalized.
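
A hedged sketch of why a roughly constant inflation does not distort comparisons (the 15% figure and the board scores below are assumptions for illustration, not TBI findings):

```python
# If every board's ratings are inflated by roughly the same factor,
# removing that factor leaves the ranking of boards unchanged.
ASSUMED_INFLATION = 0.15                      # illustrative value, not a TBI parameter

observed = {"Board A": 5.8, "Board B": 5.2, "Board C": 6.1}   # hypothetical mean scores
adjusted = {b: round(s / (1 + ASSUMED_INFLATION), 2) for b, s in observed.items()}

rank = lambda d: sorted(d, key=d.get, reverse=True)
print(adjusted)
print("same ordering:", rank(observed) == rank(adjusted))     # -> True
```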

Our research shows that most people are honest in their responses if they know those responses are anonymous. This is the same phenomenon that occurs in other democratic processes, such as public voting and jury deliberations.

  • Safeguards. Additional safeguards are used to ensure that metric quality is consistently high. In the unusual case where the small sample does not produce a reliable result, TBI reports flag the affected items and indicate the degree of unreliability.
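
As a purely generic illustration of such a flag (TBI’s actual safeguards and thresholds are proprietary and not shown here), a report generator could mark questions whose response spread exceeds a chosen cut-off:

```python
# Generic illustration only: flag questions with unusually wide disagreement.
from statistics import stdev

SPREAD_THRESHOLD = 1.5                        # assumed cut-off, not a TBI parameter

items = {
    "Board composition":   [6, 6, 5, 7, 6, 6, 5],
    "Succession planning": [2, 7, 4, 6, 3, 7, 5],   # wide disagreement among directors
}

for name, scores in items.items():
    sd = stdev(scores)
    note = "flagged: low reliability for this sample" if sd > SPREAD_THRESHOLD else "ok"
    print(f"{name}: sd={sd:.2f} -> {note}")
```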

  • Reports. TBI reports provide a great deal of data regarding the consensus of each board. A careful examination of TBI reports shows a wide variety of data views that assure the results are both understandable and reliable. These measures work together to maximize the reliability, credibility, and validity of TBI reports.

Reporting

Actionable Results to Improve Your Board.

TBI reports enable boards to make better decisions and take action to improve their performance.

Our detailed analysis and composite scores highlight the areas that your board should focus on now and in the future. Reports provide a clear indication of where opportunities for your board to succeed lie – from the perspective of its structure, policies, and practices. Further, over time you can track your progress both internally and against external board benchmarks. TBI’s reports are objective, independent, informative, comprehensive and secure.

Every TBI report is available, at the board’s discretion, both online and via hard copy, and includes:

  • A summary and explanation of the overall score, and scores for each section.
  • The five highest-scored questions.
  • The five lowest-scored questions.
  • The five questions with the widest variation in responses.
  • For each question: the mean response, the range of responses, and the standard deviation of the responses (a generic sketch of these calculations follows this list).
  • A detailed analysis of each question, user comments, and best practices for improvement.
  • Anonymous responses to comment questions, capturing any critical areas that may not have been covered in the targeted questions and providing additional feedback for your discussions.
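
For readers who want to see the arithmetic, the sketch below shows how the per-question mean, range, and standard deviation, and the highest-, lowest-, and widest-variation lists, could be computed. The questions and responses are hypothetical, and this is not TBI’s reporting engine:

```python
# Illustrative summary statistics over hypothetical 7-point responses.
from statistics import mean, stdev

responses = {
    "Q1 Strategy oversight": [6, 7, 6, 5, 7, 6, 6, 5, 6],
    "Q2 Risk management":    [4, 5, 3, 6, 4, 5, 4, 3, 5],
    "Q3 CEO succession":     [5, 5, 6, 5, 5, 6, 5, 5, 6],
}

summary = {q: {"mean": round(mean(r), 2),
               "range": max(r) - min(r),
               "sd": round(stdev(r), 2)} for q, r in responses.items()}

by_mean = sorted(summary, key=lambda q: summary[q]["mean"])
by_sd   = sorted(summary, key=lambda q: summary[q]["sd"], reverse=True)

print("lowest scored:", by_mean[:5])            # five lowest-scored questions
print("highest scored:", by_mean[-5:][::-1])    # five highest-scored questions
print("widest variation:", by_sd[:5])           # five questions with the widest spread
print(summary["Q2 Risk management"])            # per-question mean, range, sd
```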

The best way to think of the report is as a reference point in your board’s ongoing process of self-improvement and as a critical risk-management aid. The basic question being analyzed is “Are we effective and adding value to our company?” Thus, the report provides an analysis of key board and committee metrics as well as what the directors themselves (along with invited respondents) believe the board could improve upon. The results provide a roadmap for discussion.

Free demo