New Accountability Rules Pose Dilemma for Programs
Adult Literacy Resource Institute,
Adult basic education programs in
Massachusetts may be faced with some difficult choices these days
as they work to comply with the requirements of the new National
Reporting System (NRS) and the state's SMARTT data management system.
One of these choices involves the assessment and reporting of individual
student progress in literacy and language learning. (Other choices
revolve around different issues, such as the reporting of individual
student goals, but this article will focus on the reporting of students'
progress.)
As of July 1, the NRS requires that each state and hence
each federally funded ABE/ESOL/GED program report the progress of
its adult learners in measurable, quantifiable terms, using two
"ladders" of six levels each, one built for ESOL and one for literacy/ABE/GED.
In Massachusetts, the state Department of Education (DOE) anticipated
this requirement by building into its SMARTT data management system
the requirement that programs report student progress for all students.
On the ESOL side, this means assessing students in terms of the
six Student Performance Levels (SPL) adopted by the NRS for its
own reporting. On the ABE side, DOE is requiring that programs report
in terms of grade level equivalents (GLEs) from 1 to 12, which DOE
will then translate into the six NRS levels for state reporting
purposes. Programs are not required to use standardized tests to
arrive at these SPLs or GLEs, but if they elect to use an alternative
measure, they must correlate the results of this alternative measure
with the SPL or GLE ladder and, eventually, provide proof of the
validity and reliability of these correlations.
These student assessment requirements, as mandated by the NRS and
implemented by SMARTT, can present programs with some difficult
choices in how to conduct their assessment processes so as to meet
two goals that are at least somewhat in conflict: 1) meeting these
new reporting requirements; and 2) providing teachers and students
with assessment information that is meaningful, accurate, and useful.
This article will review the three basic options that adult
basic education programs now seem to have regarding assessment.
The first of these options is for a program simply to use standardized
tests for virtually all of its student assessment. The basic advantage
of this approach, as everyone knows, is that it is rather easy to
do -- not an insignificant reason. I would argue, though, that it also
carries a number of serious disadvantages. The first is that standardized
tests simply do not appear to be very good ways of assessing the
reading, writing, and math abilities of students, and especially
of adult learners. The literature on this is vast and I won't go
into the specifics here, other than to point to the many articles
and books written by Susan Lytle, Marcie Wolfe, Marilyn Gillespie,
Elsa Auerbach, Peter Johnston, and many others over the past two
decades or more, criticizing standardized methods of assessing learning
and promoting various types of alternative assessment. (Local references
would include the Fall 1988 issue of Focus
on Basics, and the numerous volumes of Adventures in
Assessment published by SABES/World Education. The ALRI has
many resources and lists of resources on alternative assessment,
for those who are interested.)
A second disadvantage, which could at least partly derive from
the first, is that standardized tests may do a very poor job of
capturing and reflecting the learning that goes on in adult basic
education classes. In a recent posting to the NLA (National Literacy
Advocacy) electronic list, Thomas Sticht discusses a new study by
Janet K. Sheehan-Holt and M. Cecil Smith, which finds little improvement
in scores on the NALS (National Adult Literacy Survey) test by adults
participating in ABE classes. It may thus prove to be a major risk
for adult basic education programs across the country and for the
system as a whole to be judged largely on the basis of students'
improvement in scores on tests that may be inherently incapable
of capturing much of the learning that is taking place for these
students at these programs.
The third disadvantage is that, despite the literally hundreds
of tests that have been produced in this country, very few of these
are developed specifically for use with adult learners, and there
may be certain portions of our adult learner population for whom
no test is appropriate. For example, ESOL teachers have indicated
that the BEST test, which is used almost universally for determining
SPL levels with non-native-English speakers, was originally developed
for use with certain refugee populations and is not necessarily
appropriate for some other ESOL populations, especially students
at higher levels.
A fourth disadvantage is that all assessments must be rendered
in terms of either SPLs (for ESOL) or GLEs (for ABE). I can't really
speak to how well the SPL ladder works to reflect students' English
language achievement. However, the use of GLEs to report ABE progress
is certainly problematic, though it may be mechanically easy enough
to do. Quoting briefly from a few sources:
"Problems with grade level completion criteria for literacy
statistics are well documented (e.g. Coles, 1976)." (Hannah
Arlene Fingeret, Adult Literacy Education: Current and Future
Directions, ERIC, 1984, p. 8).
"Although the problems with grade levels as indicators of adult
performance and progress are well-established, their use in
the field of adult literacy is surprisingly pervasive." (Susan
Lytle, Thomas Marmor, and Faith Penner, paper presented in 1986).
"Critics of the use of grade levels, however, point out that
there is no valid translation indicating what real world literacy
skills correspond to completion of a certain number of years
in school." (Carolyn Chase Ehringhaus, Adult Education Quarterly,
1990, vol. 40, no. 4, p. 189).
"Test results that give grade level scores or indicate that
learners can identify specific skills on paper-and-pencil tasks
yield very limited information. Despite the fact that our society
in general seems quite impressed with measurable results that
can be reported numerically, such data fail to match the overall
goals. The assumption that numerical scores give evidence of
confidence and competence is highly questionable." (Rena Soifer,
et al., The Complete Theory to Practice Handbook of Adult
Literacy, Teachers College Press, 1990, p. 171).
So, while using standardized tests as the sole means of assessment
may be relatively easy, there are numerous other difficulties and
risks associated with that route. A second possible assessment option
for programs is to use various means of alternative assessment and
to translate the results of these assessments into GLEs and SPLs.
The major advantage to this approach is a very important one: it
would provide assessment information that creates a much fuller
picture of a student's literacy abilities, one that is likely to be
more meaningful and much more useful to teachers and students alike.
There are again, however, several likely disadvantages as well.
The first is the time and energy it would take to create or adapt
these methods of alternative assessment for use at a particular
program with a particular population of students. It should be noted,
though, that a great deal of work has already been done in this
area (see, for example, the various Adventures in Assessment volumes)
and more could be supported by targeted funding from the state
Department of Education. Secondly, there will be the difficulty
of proving to a sufficient degree the validity and reliability of
these measures, though obviously the criteria set for achieving
this level of proof will in large part determine how difficult this
task will be for individual programs. Again, this difficulty could
be mitigated through collaboration on the part of various programs
and the support of DOE funding.
A third disadvantage is found in the requirement that these alternative
assessments must be translated into SPLs or GLEs. Alternative assessment
is not merely another way of getting to the same place; it is also
to some degree a different destination. Alternative assessment is
based on a view of literacy and learning that doesn't see learning
to read and write and do math as activities that can be laid out
in a neat, sequential series of skills through which all learners
progress from bottom to top. Alternative assessment approaches attempt
to create a picture of a learning process that is by its very nature
non-linear and that can vary tremendously from person to person.
Having to translate alternative measures of assessment into GLEs,
at least on the ABE side, works to negate the original intent,
meaning, and value of the alternative assessment process.
A third option for programs is to combine elements of the
first two (including their advantages and disadvantages) by using
both standardized tests and alternative assessments. This hybrid
option would use standardized test results to meet the requirements
of the new reporting system in a relatively easy way, while using
an alternative assessment approach to provide meaningful and useful
information to teachers and students. This option would still require
programs' time and effort to develop alternative assessments and
would still run the risk of not capturing for reporting purposes
the actual learning that is going on in classes. Nevertheless, this
option may be the best of those available.
In the long run, we as a field will need to "assess" how well the
new approaches to assessment and accountability -- the NRS and SMARTT
systems -- are capturing and reflecting the learning that students
achieve as they attend our classes.
Originally published in Adventures in Assessment,
Volume 13 (Spring 2001),
SABES/World Education, Boston, MA, Copyright 2001.
Funding support for the publication of this document
on the Web provided in part by the Ohio State Literacy Resource
Center as part of the LINCS
Assessment Special Collection.