Validity and Reproducibility of the Peer Assessment Rating Index Scored on Digital Models Using a Software Compared with Traditional Manual Scoring
The aim of this study was to assess the validity and reproducibility of digital scoring of the Peer Assessment Rating (PAR) index and its components using software, compared with conventional manual scoring on printed model equivalents. The PAR index was scored on 15 cases at pre- and post-treatment stages by two operators using two methods: digitally, on direct digital models; and manually, on printed model equivalents.
Main Authors: | Arwa Gera, Shadi Gera, Michel Dalstra, Paolo M. Cattaneo, Marie A. Cornelis
---|---|
Format: | Article |
Language: | English |
Published: | MDPI AG, 2021-04-01
Series: | Journal of Clinical Medicine |
Subjects: | orthodontics; CAD/CAM; PAR index; dental models; digital models; clinical
Online Access: | https://www.mdpi.com/2077-0383/10/8/1646 |
_version_ | 1797537958226558976 |
---|---|
author | Arwa Gera; Shadi Gera; Michel Dalstra; Paolo M. Cattaneo; Marie A. Cornelis
author_sort | Arwa Gera |
collection | DOAJ |
description | The aim of this study was to assess the validity and reproducibility of digital scoring of the Peer Assessment Rating (PAR) index and its components using software, compared with conventional manual scoring on printed model equivalents. The PAR index was scored on 15 cases at pre- and post-treatment stages by two operators using two methods: first, digitally, on direct digital models using Ortho Analyzer software; and second, manually, on printed model equivalents using a digital caliper. All measurements were repeated at a one-week interval. Paired-sample t-tests were used to compare the PAR scores and their components between the two methods and the two raters. Intra-class correlation coefficients (ICC) were used to compute intra- and inter-rater reproducibility. The error of the method was calculated. The agreement between the two methods was analyzed using Bland-Altman plots. There were no significant differences in mean PAR scores between the two methods or the two raters. ICCs for intra- and inter-rater reproducibility were excellent (≥0.95). All error-of-the-method values were smaller than the associated minimum standard deviation. Bland-Altman plots confirmed the agreement between the two methods. PAR scoring on digital models showed excellent validity and reproducibility compared with manual scoring on printed model equivalents using a digital caliper.
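The analysis pipeline named in the abstract (paired t-tests, ICC, error of the method, Bland-Altman limits of agreement) can be reproduced with standard statistics. The sketch below is a minimal illustration in Python, not the authors' code: the score vectors are invented placeholders, and the specific choices of ICC(2,1) and Dahlberg's formula for the error of the method are assumptions, since this record does not state which variants the paper used.

```python
# Minimal sketch (not the authors' code) of the agreement statistics the
# abstract describes: paired t-test, ICC(2,1), Dahlberg's error of the
# method, and Bland-Altman limits of agreement.
import numpy as np
from scipy import stats

# Hypothetical total PAR scores for the same 15 cases, scored digitally
# (Ortho Analyzer) and manually (digital caliper on printed models).
# These numbers are placeholders, not the study's data.
digital = np.array([28, 31, 19, 24, 35, 12, 22, 27, 30, 17, 25, 33, 21, 16, 29], float)
manual  = np.array([27, 32, 20, 24, 34, 13, 21, 28, 29, 18, 25, 32, 22, 15, 30], float)

# Paired-sample t-test between the two methods.
t, p = stats.ttest_rel(digital, manual)

# Dahlberg's error of the method: ME = sqrt(sum(d_i^2) / (2n)).
d = digital - manual
me = np.sqrt(np.sum(d**2) / (2 * len(d)))

# Bland-Altman agreement: bias (mean difference) and 95% limits of agreement.
bias = d.mean()
loa = (bias - 1.96 * d.std(ddof=1), bias + 1.96 * d.std(ddof=1))

def icc_2_1(x):
    """ICC(2,1), two-way random effects, absolute agreement, single rater
    (Shrout & Fleiss), for an n-targets x k-raters matrix without replication."""
    n, k = x.shape
    grand = x.mean()
    msr = k * np.sum((x.mean(axis=1) - grand) ** 2) / (n - 1)   # rows (cases)
    msc = n * np.sum((x.mean(axis=0) - grand) ** 2) / (k - 1)   # columns (raters/methods)
    resid = x - x.mean(axis=1, keepdims=True) - x.mean(axis=0, keepdims=True) + grand
    mse = np.sum(resid ** 2) / ((n - 1) * (k - 1))
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

print(f"paired t-test: t = {t:.2f}, p = {p:.3f}")
print(f"Dahlberg ME = {me:.2f}, bias = {bias:.2f}, 95% LoA = ({loa[0]:.2f}, {loa[1]:.2f})")
print(f"ICC(2,1) between methods = {icc_2_1(np.column_stack([digital, manual])):.3f}")
```

Dahlberg's formula is the conventional error-of-the-method statistic in orthodontic method-comparison studies; the Bland-Altman limits are the bias ± 1.96 standard deviations of the paired differences.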
first_indexed | 2024-03-10T12:23:38Z |
format | Article |
id | doaj.art-e53e0af0385a4de3939467d45b20ac7b |
institution | Directory Open Access Journal |
issn | 2077-0383 |
language | English |
last_indexed | 2024-03-10T12:23:38Z |
publishDate | 2021-04-01 |
publisher | MDPI AG |
record_format | Article |
series | Journal of Clinical Medicine |
spelling | doaj.art-e53e0af0385a4de3939467d45b20ac7b (indexed 2023-11-21T15:18:21Z). Journal of Clinical Medicine (MDPI AG, ISSN 2077-0383), 2021-04-01, vol. 10, no. 8, art. 1646, doi:10.3390/jcm10081646. Affiliations: Arwa Gera, Shadi Gera, and Michel Dalstra, Section of Orthodontics, Department of Dentistry and Oral Health, Aarhus University, 8000 Aarhus C, Denmark; Paolo M. Cattaneo and Marie A. Cornelis, Faculty of Medicine, Dentistry and Health Sciences, Melbourne Dental School, University of Melbourne, Carlton, VIC 3053, Australia. https://www.mdpi.com/2077-0383/10/8/1646
title | Validity and Reproducibility of the Peer Assessment Rating Index Scored on Digital Models Using a Software Compared with Traditional Manual Scoring |
topic | orthodontics; CAD/CAM; PAR index; dental models; digital models; clinical
url | https://www.mdpi.com/2077-0383/10/8/1646 |