In a study by Vogel, Maruff, and Morgan (2010), 174 Australian and New Zealand speech pathologists were surveyed to determine which types and patterns of assessment were most commonly used within an acute setting. For this study, 'acute' stroke care was defined as the first 30 days post stroke. Clinicians who completed the survey worked in the acute setting (42.5% of respondents), inpatient rehabilitation (26.4%), outpatient rehabilitation (25.9%), private practice (2.9%) or aged care (2.3%). For language assessments, Vogel et al. (2010) reported that over 70% of participants used an informal language assessment (via interaction and observation) and over 50% used an individualised assessment developed by themselves or the institution in which they worked. A large percentage of speech pathologists (78.2%) also used the Mount Wilga High Level Language Screening Test. Other commonly used assessments included the Psycholinguistic Assessments of Language Processing in Aphasia (PALPA) (63.8%), the Western Aphasia Battery (63.2%), the Boston Naming Test (63.2%) and the Boston Diagnostic Aphasia Examination (50.6%). See Table 1 below for a complete list of language assessments used by speech pathologists. For a breakdown of the most commonly used assessments within specific clinical settings, see Table 2.
Table 1: Language assessments used by speech pathologists (Vogel et al., 2010)
Test | Use (%) | Test | Use (%) |
---|---|---|---|
Aachen Aphasia Test | 0.0 | Minnesota Test for Differential Diagnosis of Aphasia | 2.3 |
Acute Aphasia Screening Protocol | 1.1 | Mississippi Aphasia Screening Test | 5.2 |
An individualized assessment developed by yourself or your institution | 51.1 | Mount Wilga High Level Language Screening Test | 78.2 |
Aphasia Language Performance Scales | 1.7 | NIH Stroke Scale | 0.6 |
Bedside Evaluation Screening Test | 20.1 | Other | 5.7 |
Boston Diagnostic Aphasia Examination | 50.6 | Porch Index of Communicative Ability | 0.6 |
Boston Naming Test | 63.2 | Psycholinguistic Assessments of Language Processing in Aphasia (PALPA) | 63.8 |
Burden of Stroke Scale | 0.0 | Pyramids and Palm Trees | 2.9 |
Caulfield Language for Cognition | 4.6 | Quick Assessment for Aphasia | 0.6 |
Cognitive Linguistic Quick Test | 1.7 | Reitan-Indiana Aphasia Screening Examination | 0.0 |
Communication Activities for Daily Living | 2.9 | ScreeLing | 0.0 |
Communicative Effectiveness Index | 8.0 | Sheffield Screening Test for Language Disorders | 5.7 |
Comprehensive Aphasia Test | 1.7 | Sklar Aphasia Scale | 0.0 |
Frenchay Aphasia Screening Test | 14.4 | Test for Reception of Grammar | 2.9 |
Functional Assessment of Communication Skills for Adults | 0.6 | The Aphasia Screening Test | 13.2 |
Functional Communication Profile | 27.0 | Ullevaal Aphasia Screening Test | 0.0 |
Informal Assessment (via interaction and observation) | 70.1 | Wechsler Individual Achievement Test | 0.6 |
Information Language Processing Screen (ILPS) | 24.1 | Western Aphasia Battery | 63.2 |
Inpatient Functional Communication Interview | 10.9 | Whurr Aphasic Screening Test | 2.3 |
LARSP | 0.6 | | |
Measure of Cognitive-Linguistic Abilities | 2.9 | | |
Table 2: Popular speech and language assessments as determined by clinical setting (Vogel et al., 2010)
Clinical setting (% of respondents) | Most popular language assessment (% use within setting) |
---|---|
Acute hospital (42.5) | An individualized assessment developed by clinician or institution (69.4) |
Inpatient rehabilitation (26.4) | Mount Wilga High Level Language Screening Test (93.5) |
Outpatient rehabilitation (25.9) | Mount Wilga High Level Language Screening Test (86.6) |
As discussed by Bruce and Edmundson (2010), there are many tests that can be used to assess people with aphasia. The decision to use a particular assessment depends on the user’s theoretical perspective, their experience, the aims of the assessment process, the goals of therapy, the characteristics of the person with aphasia, the environment, and the time and resources available (Kerr, 1993). Whether comprehensive language batteries meet the purpose(s) of assessment has been debated throughout the literature. The pros and cons of language batteries for aphasia assessment are summarised below (adapted from Bruce and Edmundson's table, 2010, p. 92).
Table 3: Pros and cons of comprehensive language batteries in aphasia assessment (adapted from Bruce & Edmundson’s table, 2010, p. 92)
Pros | Cons |
---|---|
Table 4: Pros and cons of informal assessments of aphasia
Bland et al. (2013) reported on adherence to standardised assessments among therapists, including speech pathologists. The authors examined clinical adherence to a standardised assessment battery and found that adherence varied across settings (acute, inpatient rehabilitation, outpatient), professional discipline (physical therapy, occupational therapy, speech pathology), and time of assessment (admission, discharge/monthly). Of the three disciplines, speech pathologists had the lowest adherence (median 0.68). Of the settings, the outpatient facility had the lowest adherence across all disciplines. While the article does not provide the clinicians' perspectives on why they chose to use the assessments in a different manner, challenges of using standardised assessments were mentioned in the Vogel et al. (2010) study and included being too time-consuming, insensitive to change, and unable to be repeated with sufficient frequency. Potentially, clinicians see standardised assessments as more of a guide that requires some adaptation to suit their needs and the clinical settings in which they work. It should also be noted that many standardised assessments are not intended to be sensitive to change, owing to the small number of items in their sub-tests; their strength lies in informing the diagnostic process.
Vogel et al. (2010) conclude that the complex and fluctuating nature of communication in the early stages post stroke requires specialist assessment. However, in the absence of a standardised, population-specific tool that meets these needs, they suggest that a dynamic assessment procedure is currently most effective. Such an approach has inherent challenges (e.g. subjective judgement, reduced accuracy and sensitivity), and a framework is therefore necessary for interpreting findings from informal assessments (Vogel et al., 2010).
Surveys report that clinicians more frequently use impairment-based measures in clinical practice (Rose et al., 2013; Verna, Davidson & Rose, 2009) and may therefore under-assess the functional, activity and quality-of-life (QOL) aspects of aphasia.