Many research designs require the assessment of interrater reliability (IRR) to demonstrate the degree of agreement among coders. IRR statistics must be selected carefully to match the design and purpose of the study and to be appropriate for the data that were collected. Researchers should use validated IRR statistics rather than percentage agreement or other indices that do not correct for chance agreement or provide information about statistical power. Thorough analysis and reporting of IRR results will yield clearer findings for the research community.

As an example, the intraclass correlation coefficient (ICC) was calculated to assess agreement among three physicians rating the anxiety levels of 20 people. Absolute agreement among the three physicians was poor, using a two-way random-effects model and the "single rater" unit: ICC = 0.20, p = .056.

For two-way random- and mixed-effects models, there are two ICC definitions: "absolute agreement" and "consistency." The choice between them depends on whether we consider absolute agreement or consistency between raters to be more important. The selection of the correct ICC form for an interrater reliability study can be guided by four questions: (1) Do we have the same set of raters for all subjects? (2) Do we have a sample of raters randomly drawn from a larger population, or a specific sample of raters? (3) Are we interested in the reliability of a single rater or of the mean of several raters? (4) Are we concerned with consistency or with agreement? The first two questions guide the selection of the "model," question 3 guides the selection of the "type," and question 4 guides the selection of the "definition."
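The form used in the physician example, a two-way random-effects model with absolute agreement for a single rater, is often written ICC(2,1). A minimal sketch of how it can be computed from standard ANOVA mean squares (Shrout and Fleiss notation) is shown below; the ratings matrix and function name here are illustrative, not the data from the example above.

```python
import numpy as np

def icc2_1(ratings):
    """ICC(2,1): two-way random effects, absolute agreement, single rater.

    ratings: array of shape (n_subjects, k_raters).
    """
    x = np.asarray(ratings, dtype=float)
    n, k = x.shape
    grand = x.mean()
    row_means = x.mean(axis=1)   # per-subject means
    col_means = x.mean(axis=0)   # per-rater means

    # ANOVA sums of squares
    ss_rows = k * np.sum((row_means - grand) ** 2)   # between subjects
    ss_cols = n * np.sum((col_means - grand) ** 2)   # between raters
    ss_total = np.sum((x - grand) ** 2)
    ss_err = ss_total - ss_rows - ss_cols            # residual

    msr = ss_rows / (n - 1)
    msc = ss_cols / (k - 1)
    mse = ss_err / ((n - 1) * (k - 1))

    # Shrout & Fleiss ICC(2,1)
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

# Hypothetical data: 20 subjects rated by 3 raters
rng = np.random.default_rng(0)
truth = rng.normal(50, 10, size=(20, 1))
ratings = truth + rng.normal(0, 5, size=(20, 3))
print(round(icc2_1(ratings), 3))
```

The consistency definition would drop the between-rater term from the denominator, which is why systematic differences between raters lower the absolute-agreement ICC but not the consistency ICC.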
Cicchetti (1994) gives the following, often-cited guidelines for interpreting kappa or ICC interrater agreement measures: less than 0.40, poor; 0.40 to 0.59, fair; 0.60 to 0.74, good; 0.75 to 1.00, excellent. The intraclass correlation coefficient (ICC) can be used to measure the strength of interrater agreement when the rating scale is continuous or ordinal.
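The Cicchetti (1994) cutoffs can be expressed as a small helper function (the function name is our own):

```python
def cicchetti_band(value):
    """Map an ICC or kappa value to Cicchetti's (1994) qualitative band."""
    if value < 0.40:
        return "poor"
    if value < 0.60:
        return "fair"
    if value < 0.75:
        return "good"
    return "excellent"

print(cicchetti_band(0.20))  # → "poor", matching the physician example above
```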
It is suitable for studies with two or more raters. Note that the ICC can also be used for test-retest reliability analysis (repeated measures of the same subject) and intra-rater reliability (multiple scores from the same rater). IRR assessment quantifies the degree of agreement among two or more coders who make independent ratings of the features of a set of subjects.
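For two coders assigning categorical (rather than continuous) codes, the chance-corrected counterpart is Cohen's kappa, kappa = (p_o - p_e) / (1 - p_e), where p_o is observed agreement and p_e is agreement expected by chance from each coder's marginal distribution. A minimal sketch with invented data:

```python
from collections import Counter

def cohens_kappa(codes_a, codes_b):
    """Cohen's kappa for two coders of the same subjects."""
    assert len(codes_a) == len(codes_b)
    n = len(codes_a)
    # Observed proportion of agreement
    p_o = sum(a == b for a, b in zip(codes_a, codes_b)) / n
    # Chance agreement from the two coders' marginal code frequencies
    freq_a, freq_b = Counter(codes_a), Counter(codes_b)
    p_e = sum(freq_a[c] * freq_b.get(c, 0) for c in freq_a) / (n * n)
    return (p_o - p_e) / (1 - p_e)

a = ["yes", "yes", "no", "no", "yes", "no"]
b = ["yes", "no",  "no", "no", "yes", "yes"]
print(round(cohens_kappa(a, b), 3))  # → 0.333
```

Note how raw percentage agreement here is 4/6 ≈ 0.67, while kappa drops to 0.33 once the 50% agreement expected by chance is removed, which is precisely why chance-corrected statistics are recommended over percentage agreement.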