Whole-word response scoring underestimates functional spelling ability for some individuals with global agraphia
Main Author: | Andrew Tesla Demarco (University of Arizona) |
Format: | Article |
Language: | English |
Published: | Frontiers Media S.A., 2015-05-01 |
Series: | Frontiers in Psychology |
Subjects: | Agraphia; Aphasia; Writing; Treatment; Spelling; Scoring |
Online Access: | http://journal.frontiersin.org/Journal/10.3389/conf.fpsyg.2015.65.00049/full |
description | Introduction
Assessment of spelling deficits in aphasia typically follows the convention that responses are scored as either correct or incorrect, with some coding of error type. In some instances, the response may be quite close to the target (e.g., circiut for circuit), while in other cases the response bears little resemblance (e.g., tricenn for circuit). Responses that resemble the target may have functional value: they can be deciphered by a communication partner when used in context, may evoke automatic self-corrections, or may provide better input to an electronic spellchecker. Treatment for spelling impairments may result in closer approximations as well as an increase in the overall number of correct responses. Thus, finer-grained analysis of response accuracy is clearly desirable, but scoring responses consistently at that level requires considerable time and decision-making. To address this issue, we constructed a software tool to automatically score electronically transcribed responses on a by-letter basis.
Methods
To evaluate the potential differences between by-letter scoring and conventional whole-word scoring, we examined pre- and post-treatment written responses from 18 individuals diagnosed with global agraphia on the 80 real-word items from the Arizona Battery for Reading and Spelling. Responses were scored by conventional whole-word accuracy, on a by-letter basis where the order of letters in each response was required to match the target, and on a by-letter basis where responses were not penalized for incorrect letter order. The resulting accuracy scores were analyzed in a two-way repeated measures ANOVA that examined the effect of test time (pre- vs. post-treatment) and scoring method on estimated spelling accuracy.
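The three scoring methods lend themselves to a simple computational sketch. The abstract does not specify the tool's actual algorithm, so the following Python sketch rests on two assumptions: that fixed-order by-letter scoring credits the longest common subsequence of letters shared with the target, and that free-order scoring credits any target letter produced, regardless of position. Function names here are illustrative, not the tool's API.

```python
from collections import Counter

def whole_word_score(response: str, target: str) -> float:
    """Conventional scoring: all-or-nothing credit per item."""
    return 1.0 if response == target else 0.0

def fixed_order_score(response: str, target: str) -> float:
    """By-letter scoring where credited letters must appear in target order.
    Sketched here as longest-common-subsequence length / target length,
    computed with the classic dynamic-programming recurrence."""
    prev = [0] * (len(target) + 1)
    for ch in response:
        curr = [0]
        for j, tch in enumerate(target, 1):
            curr.append(prev[j - 1] + 1 if ch == tch else max(prev[j], curr[-1]))
        prev = curr
    return prev[-1] / len(target)

def free_order_score(response: str, target: str) -> float:
    """By-letter scoring with no penalty for letter order: multiset
    overlap of response and target letters / target length."""
    overlap = Counter(response) & Counter(target)
    return sum(overlap.values()) / len(target)

# The abstract's two example responses for the target "circuit":
for resp in ("circiut", "tricenn"):
    print(resp,
          whole_word_score(resp, "circuit"),
          round(fixed_order_score(resp, "circuit"), 2),
          round(free_order_score(resp, "circuit"), 2))
```

Under these assumptions, the close response circiut earns no whole-word credit but six of seven letters in fixed-order scoring, and full credit in free-order scoring (it is an anagram of circuit), while the distant response tricenn earns far less under every method, which is the kind of gradation whole-word scoring cannot express.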
Results
As expected, spelling scores were significantly better after treatment, regardless of scoring method (F(1,17) = 44.55, p < 0.001). As shown in the figure, there was a significant effect of scoring method (F(1.22, 20.66) = 156.72, p < 0.001), with by-letter fixed-order scoring yielding significantly higher scores (pre 26.8%, post 44.1%) than whole-word scoring (pre 5.9%, post 18.6%), and by-letter free-order scoring yielding significantly higher scores still (pre 42.6%, post 59.3%). There was no significant interaction between test time (pre- vs. post-treatment) and scoring method (F(1.14, 19.29) = 2.32, p = 0.142). Although all patients gained at least an additional 5.8% (fixed order) or 14.1% (any order) relative to whole-word scoring, some patients benefited by up to 42.9% (fixed order) and up to 66.9% (any order) from by-letter scoring methods.
Discussion
These data suggest that conventional whole-word scoring may significantly underestimate functional spelling performance. Because by-letter scoring boosted pre-treatment scores to the same extent as post-treatment scores, the magnitude of treatment gains was no greater than estimates from conventional whole-word scoring. Nonetheless, the surprisingly large disparity between conventional whole-word scoring and by-letter scoring suggests that by-letter scoring methods warrant further investigation. Because by-letter analyses may be of interest to others, we plan to make the software tool used in this study available online to researchers and clinicians at large. |
ISSN: | 1664-1078 |