Identity Access Management

Enhancing the Usefulness of Performance Measures for an IAM Program

Performance measurement and reporting have real limitations. Even so, they can provide essential information to support decision-making and to assess accountability. Many critics of performance measures worry that the measures will be viewed in a vacuum (i.e., without any other indicators of mission fulfillment or a description of environmental factors).

Numerous variables affect the processes associated with a best-practices program, and many of them cannot be quantified or measured. The following guidelines are designed to help policymakers overcome the limitations of performance information and enhance its usefulness.

  • Communicate the intended use of performance information and the potential difficulties associated with its use. State how the performance information will demonstrate whether goals and objectives are being achieved, and disclose possible difficulties in using the data where the measurement relates to service level or satisfaction. By clearly communicating the meaning of performance measures and their limitations to the audience, a program can maximize the usefulness of such measures and avoid conflicts over their perceived value.

  • Performance measures should be presented alongside explanatory information that helps users understand the reasons for reported performance levels and the degree to which those factors may have affected the reported results. Furthermore, when a surrogate measure is used, the ideal measure should be identified as well, along with an explanation of why the surrogate does not necessarily have a direct relationship to the outcome. An explanation of how to interpret the surrogate measure is necessary, as are the reasons for choosing it.

  • Even with comparative information, there is not always a clear cause-and-effect relationship between the service provided and the resulting outcome. Numerous explanatory factors, completely or partially beyond the control of the Identity Access Management and Governance Best Practices program, may have a significant effect on results. These factors, when identified, should be reported in supplementary information along with their possible effects. It is impossible to quantify the impact of every independent variable on the outcome of services, but staff should be aware of environmental factors that might significantly prevent or enhance desired results.

  • Performance information, even with comparisons and supplementary information, may not provide a sufficient basis for explaining program performance levels. Therefore, additional information gathered through program evaluations, performance audits, or segregated measures will help users understand the reasons for a given level of performance.

  • Segregating reported performance information by geographical location, such as by region or branch office, can be particularly helpful in enhancing users’ ability to understand performance. Breaking down information in this manner provides data at a more meaningful level and separates variations that aggregated information often hides. By breaking measures into smaller components, it is possible to see trends that would not otherwise be observed if the data were considered only as part of the whole.

  • Performance information should include only those measures that are key to decision-making and accountability. However, it is important to balance this need for conciseness with concern for completeness. If reported performance information is less than comprehensive, the user may be left with an incomplete understanding of overall performance. Comprehensive reporting also helps prevent the selective reporting of only those measures that indicate positive results. At the same time, the inclusion of too many measures can confuse and overwhelm users.
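The point about segregating measures by location can be illustrated with a small sketch. The office names, review volumes, and completion rates below are invented for the example, which assumes a metric such as quarterly access-review completion rate; the aggregate figure looks nearly flat while one office declines sharply.

```python
# Hypothetical data: quarterly access-review completion rates (%) per branch
# office. All names and figures are illustrative, not from any real program.
rates = {
    "Office A": [96, 97, 98, 99],   # steadily improving
    "Office B": [90, 80, 70, 60],   # declining sharply
}
# Review volumes per office; the larger office dominates the aggregate.
reviews_per_office = {"Office A": 900, "Office B": 100}

total = sum(reviews_per_office.values())
for q in range(4):
    # Volume-weighted aggregate rate for the quarter.
    aggregate = sum(rates[o][q] * reviews_per_office[o] for o in rates) / total
    detail = "  ".join(f"{o}: {rates[o][q]}%" for o in rates)
    print(f"Q{q + 1}: aggregate {aggregate:.1f}%   {detail}")
```

Because Office A handles nine times the review volume, the aggregate stays near 95% every quarter even as Office B falls from 90% to 60%; only the segregated view reveals the problem.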