Analyzing Attribute Agreement Analysis

The audit should help identify the specific individuals and codes that are the main sources of problems, and the attribute agreement analysis should help determine the relative contributions of repeatability and reproducibility problems for those specific codes (and individuals). In addition, many bug databases have accuracy problems in the records that indicate where an error was introduced, because what gets recorded is the place where the error was detected, not where it was created. The point of detection says little about the cause, so the accuracy of the location assignment should also be an element of the audit.

An attribute agreement analysis allows the impact of repeatability and reproducibility on accuracy to be assessed simultaneously. It lets the analyst study the responses of multiple appraisers across multiple scenarios, and it compiles statistics that assess the appraisers' ability to agree with themselves (repeatability), with each other (reproducibility), and with a known standard or correct value (overall accuracy) for each characteristic, over and over again. Since carrying out an attribute agreement analysis can be time-consuming, expensive, and generally uncomfortable for everyone involved (the analysis is simple compared with the execution), it's best to take a moment to really understand what needs to be done and why.

A bug tracking system, however, is not a continuous gauge: the assigned values are either correct or incorrect, and there is no grey area. If codes, locations, and severity levels are defined effectively, there is only one correct attribute for each of these categories for a given error. As with any measurement system, the accuracy and precision of the database must be understood before the information is used (or at least while it is being used) to make decisions.
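To make those three kinds of agreement concrete, here is a minimal sketch of how such a study could be tabulated. The appraiser names, defect codes, and sample data are purely hypothetical; a real study would use sampled bug records whose "correct" codes were established by an expert panel.

```python
# Minimal sketch of an attribute agreement layout for a bug-database audit.
# All names and data are hypothetical.
standard = {1: "UI", 2: "Logic", 3: "UI", 4: "Config", 5: "Logic"}  # known-correct codes

# Each appraiser codes every record twice (two trials).
appraisals = {
    "Alice": {1: ["UI", "UI"], 2: ["Logic", "UI"], 3: ["UI", "UI"],
              4: ["Config", "Config"], 5: ["Logic", "Logic"]},
    "Bob":   {1: ["UI", "Config"], 2: ["Logic", "Logic"], 3: ["UI", "UI"],
              4: ["Config", "Config"], 5: ["UI", "Logic"]},
}

def within_appraiser(trials_by_record):
    """Repeatability: fraction of records where an appraiser agrees with themselves."""
    consistent = sum(1 for t in trials_by_record.values() if len(set(t)) == 1)
    return consistent / len(trials_by_record)

def vs_standard(trials_by_record, standard):
    """Accuracy: fraction of records where every trial matches the known value."""
    correct = sum(1 for rec, t in trials_by_record.items()
                  if all(code == standard[rec] for code in t))
    return correct / len(trials_by_record)

def between_appraisers(appraisals):
    """Reproducibility: fraction of records where all appraisers and trials agree."""
    records = next(iter(appraisals.values())).keys()
    agree = sum(1 for rec in records
                if len({code for a in appraisals.values() for code in a[rec]}) == 1)
    return agree / len(records)

for name, trials in appraisals.items():
    print(name, "within:", within_appraiser(trials), "vs standard:", vs_standard(trials, standard))
print("between appraisers:", between_appraisers(appraisals))
```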

At first glance, the obvious starting point would seem to be an attribute agreement analysis (or attribute gauge R&R study). But that may not be such a good idea. I once put a full set of appraisal results and standard values into Minitab and ran the attribute agreement analysis. The agreement percentages for "Within Appraisers" and "Appraiser vs Standard" were only about 60%, and some of the kappa values were less than 0.6. The results were quite poor. The precision of a measurement system is analyzed by breaking it into two essential components: repeatability (the ability of a given appraiser to assign the same value or attribute several times under the same conditions) and reproducibility (the ability of several appraisers to agree with each other over a range of circumstances). In an attribute measurement system, repeatability or reproducibility problems inevitably cause accuracy problems.
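As a rough illustration of why kappa can look worse than the raw agreement percentage, here is a small sketch with invented data (the statistic is the same kind Minitab reports alongside "Appraiser vs Standard"): the percent agreement is 70%, but Cohen's kappa, which discounts the agreement expected by chance alone, comes out at only 0.50.

```python
# Hypothetical "Appraiser vs Standard" check: percent agreement vs Cohen's kappa.
from sklearn.metrics import cohen_kappa_score

standard  = ["UI", "Logic", "UI", "Config", "Logic", "UI", "UI", "Logic", "Config", "UI"]
appraiser = ["UI", "UI",    "UI", "Config", "Logic", "UI", "UI", "UI",    "Config", "Logic"]

percent_agreement = sum(s == a for s, a in zip(standard, appraiser)) / len(standard)
kappa = cohen_kappa_score(standard, appraiser)

print(f"Percent agreement: {percent_agreement:.0%}")  # 70% with this data
print(f"Cohen's kappa:     {kappa:.2f}")              # 0.50, well below the raw agreement
```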

In addition, if the overall accuracy, repeatability, and reproducibility are known, bias can be detected: situations where the decisions are systematically wrong. Repeatability and reproducibility are the elements of precision in an attribute measurement system analysis, and it is advisable to determine first whether or not there is a precision problem. This means that before designing an attribute agreement analysis and choosing the appropriate scenarios, the analyst should consider an audit of the database to determine whether past events have been coded correctly. First, the analyst should firmly establish that the data really are attribute data. . . .
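Such a database audit can be as simple as re-examining a sample of past records and cross-tabulating the code originally assigned against the code the audit determines to be correct. The sketch below uses invented codes and data; disagreements that pile up in a single cell are a hint of the systematic bias mentioned above.

```python
# Hypothetical audit check: cross-tabulate assigned codes against known-correct codes.
from collections import Counter

standard = ["Config", "Config", "Config", "UI", "UI", "Logic", "Logic", "Config"]
assigned = ["UI",     "UI",     "Config", "UI", "UI", "Logic", "Logic", "UI"]

confusion = Counter(zip(standard, assigned))
for (truth, coded), n in sorted(confusion.items()):
    flag = "  <-- systematic?" if truth != coded and n > 1 else ""
    print(f"standard={truth:7s} coded as {coded:7s}: {n}{flag}")
```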
