
If You Could Wave a Magic Wand

by Pat Campbell

If you could wave a magic wand, what changes, if any, would you make to assessment practices in your program? In the spring of 2005, this was one of 42 questions posed to 400 educators who participated in an online survey about student assessment. This article shines a spotlight on the changes that respondents indicated they would like to make with respect to assessment practices. Data analysis revealed that these changes revolved around two key areas: human and material resources, and assessment processes and products.

The Sample

The survey respondents worked in adult literacy and basic education programs situated across Canada—from Dawson City, Yukon, to St. John’s, Newfoundland, and from as far south as Windsor, Ontario, to as far north as Grise Fiord on Ellesmere Island, Nunavut. The number of respondents who worked in programs delivered by community-based agencies and colleges was almost equivalent, 44 and 40 per cent respectively. A smaller percentage worked in a program offered by a school board (11 per cent) or workplace (five per cent). The programs served a broad cross-section of students, from beginning readers to students seeking their grade 12.

If a delivery agency in a given jurisdiction had fewer than 30 programs, all of them were asked to participate in the survey. Where an agency had more than 30 programs, 50 per cent of the programs were randomly sampled in order to ensure a representative sample.

Although the respondents filled multiple roles as adult educators, the majority (64 per cent) reported that they were program coordinators or directors. Seventy-eight per cent of the respondents were female, reflecting the gender bias common in the field. Their ages ranged from 18 to 74, and the highest percentage of respondents (41 per cent) were in the 45-to-54 age group. Their hours of paid time per week ranged from less than 10 to more than 40: the largest cohort (43 per cent) worked between 31 and 40 hours.

Fifty-five per cent of the respondents had worked in the field of adult literacy for nine years or more. The respondents were well educated, with over one-half (55 per cent) holding a bachelor of education degree or diploma, 24 per cent holding a master’s degree, and one per cent a doctoral degree. Only four per cent of the respondents did not have a post-secondary certificate, diploma or degree. Slightly over one-half of the respondents (56 per cent) had taken university or college credit courses that focused on assessment.

Findings

Human Resources

Capacity

According to Merrifield (1998), in order to meet the demands of accountability, delivery agencies that provide educational services need the capacity to perform—that is, to achieve performance goals and to have the resources to document their achievements. The findings from this survey indicated that many respondents are mandated by funding agencies to conduct comprehensive, ongoing and exit assessments, yet they do not have the capacity to fulfill this mandate. In fact, one in three respondents reported they do not have the time to administer and interpret assessments and write reports.

One woman commented on the “huge time factor involved in planning appropriate ongoing and exit assessments.” This, coupled with the fact that many students leave midway through the program without notice, makes it difficult to use assessments to monitor progress. It is also challenging to make assessment a priority when so many competing responsibilities, duties and pressures impinge upon an educator’s time. The following statement from the director of an adult basic education program in a small rural college reflects the multifaceted roles of many practitioners in all locales:

I feel that my initial assessments are good, but since I am responsible for every aspect of the program from administration, assessment, training, tutor training, matching, goal setting, plans, information and referral etc., I find that my ongoing and exit assessments are, therefore, sometimes lacking.

While many respondents wanted more release or paid time for existing staff to administer and interpret assessments, others wanted to hire one person to conduct initial, ongoing and exit assessments. Capacity was also cited as a recurring barrier to implementing performance measurement in a 2002 survey conducted by the Ontario Literacy Coalition.

Communication channels

Some colleges and school districts do have a testing or counselling centre where one person is assigned to administer intake assessments. In a few instances, colleges have a person within the adult basic education department who is responsible for assessment. A few of the respondents who worked in testing or counselling centres expressed the need for more consultation with ABE instructors in order to ensure individualized instruction based on the assessment. On the other side of the coin, some of the instructors wanted the assessors in testing or counselling centres to arrange case conferences and to share test results with faculty in the form of teaching and learning strategy recommendations. One respondent stated, “there needs to be more discussion about potential students between the assessment officer and the instructor and/or chair who does the interviewing.” This suggests that having one person assigned to assessment does not always ensure that instructors and students will receive the information they need to teach and learn. In addition to an assessment or counselling centre, post-secondary institutions need effective communication channels between assessors and instructors.

Referrals

Usually, adult basic education practitioners do not have the qualifications to diagnose learning disabilities. Consequently, many respondents want the financial resources to contract professionals who can conduct psycho-educational assessments and determine specific learning requirements and challenges. In summary, programs need the resources to make referrals when specialized assessments are required.

Material Resources

The majority of respondents spoke of access to assessment tools and professional development (PD) in the same breath. While many respondents use informal assessment tools, others want to use commercial tools. Choosing appropriate commercial assessment tools can be a daunting task. First, one needs to know what is available. Second, one needs the funds to purchase these tools. Following the purchase of new tools, educators must deal with the next hurdle—learning to use the instrument. The complexity of the assessment instrument will dictate the amount of training educators will require in order to ensure accuracy and reliability during administration, scoring and interpretation. According to the survey findings, respondents need the material resources of time and funding to access assessment tools and professional development.

The respondents expressed a desire for a resource library of assessment tools or access to a diverse range of materials. One respondent from a community-based program lamented, “I realize all the resources that are available but the time to study and implement them just is not available given the hours the program works on and the other needs that must be slotted into those hours.” Practitioners need time to explore and familiarize themselves with other resources.

The respondents want training to gain or enhance their knowledge about specific assessment tools, to learn about recent studies on assessment theories and methodology, to receive confirmation that their assessment practices are adequate, and to ensure that they “haven’t developed any bad habits or shortcuts.” They also expressed a desire for networking sessions with their colleagues to “discuss and share resources pertaining to assessment.” Specifically, the respondents want to learn about the range of assessment tools that are “on the market, what they use, how they use them and when, and what are the best tools to use to determine reading levels, writing levels and math levels.”

While assessment can be learned through trial and error, assessment is also a socially constructed practice that needs to be learned through dialogue and reflection with colleagues. The findings indicate that the respondents prefer PD activities that allow face-to-face interaction with individuals and groups. At the aggregate level, educators selected workshops, in-service training and access to resource people or expertise as their top three PD preferences for learning about assessment. Accessing resource people or expertise differs from workshops and in-service training in two ways. First, this option allows for observation and feedback: for example, a resource person could observe a practitioner administering an assessment and then provide feedback. One respondent confirmed this by stating that on-site coaching serves “to validate the assessor’s proper use of meaningful tools.” Second, this option allows for an ongoing process rather than a one-shot event. In fact, the majority (63 per cent) of respondents who chose accessing resource people as their preference indicated that they wanted ongoing access to professional development.

Mentoring, although a popular choice in five jurisdictions, appears to be an under-utilized option, considering that it can occur within the program, making it more convenient for those with limited time and budgets for travel and participation. Mentoring is also a practical pathway for learning about formal and informal approaches to assessment as it employs observation, responsive feedback and reflection. Some researchers believe that mentoring is a good choice for PD because it can help educators acquire a “change orientation rather than just adopt new techniques” (Smith, Hofer, Gillespie, Solomon and Rowe, 2003, p. 3). Perhaps educators who engage in a mentoring process might begin to question their assumptions about assessment, which, in turn, might lead to changes in the ways they practice assessment.

Thirty-three per cent of respondents reported that they do not have time to administer, interpret, report on and/or follow up on assessments. This raises the question: “What is the point of engaging in professional development on assessment if one does not have the time to utilize what he/she has learned?” Professional development is effective only when practitioners have the time to practice, dialogue and reflect upon their new knowledge. Simply put, until the issue of capacity is addressed, professional development on assessment will not lead to more effective practice. The words of one respondent sum up the dilemma: “The Ministry expects us to do it [assess], but never provides enough funding.” Funders need to ensure that educators have the capacity to respond to what is learned through professional development.

The Assessment Process

Comprehensive assessments

Many respondents expressed a desire to administer comprehensive, in-depth intake assessments with individuals, rather than conducting group assessments. They wanted to analyze and interpret the assessment protocols in order to make informed decisions about instruction and design learning plans tailored to the individual’s needs. Further, they wanted time to discuss the assessment results with the students and provide an opportunity for students to ask questions. The data indicated that time was the primary barrier preventing people from conducting comprehensive assessments and providing feedback to the student.

Stages of assessment

Assessment can strike fear into the hearts of students because tests conjure up negative experiences in the K-to-12 school system. Yet, intake assessment continues to be the first step in the registration process for many upgrading programs. Several survey respondents did not want the initial contact to include assessment because it can discourage prospective learners, and it “puts up barriers and resistance.” One woman, who worked in a community-based program, wrote: “I would allow a longer ‘get-to-know-you’ time frame before the assessment testing is completed.” Another respondent who worked in the correctional system wanted “a process where the inmate would be stabilized before being assessed.” Postponing assessment, however, is particularly difficult in colleges dealing with a large intake of students: in these situations, determining placement in an adult basic education class is a priority in the registration process.

The data clearly indicated that intake assessments were administered more frequently than were ongoing and exit assessments. Among the 400 respondents, 91 per cent conducted intake assessments, 71 per cent ongoing, and 47 per cent exit. The instructors wanted the opportunity to measure progress, particularly through ongoing assessments, “on an as-needed basis, instead of an as-time-allows basis.” In order to measure progress in a reliable manner, the respondents noted that assessment tools need to have parallel forms for pre- and post-testing. A few respondents noted that a tracking or record-keeping system would assist in documenting and monitoring progress.

The Assessment Product

The respondents spoke of the qualities they wanted in an assessment tool. Data analysis revealed four commonly cited qualities: useful, user-friendly, current and culturally sensitive.

Useful

Many respondents were searching for the “perfect” assessment tool—a “foolproof instrument with 99.9 per cent accuracy in results.” According to one respondent, this tool will “guarantee that my initial placement and individualized instruction will always be right for the student in question. Regardless of what assessment tool I use, there is always an element of hit and miss.” The findings indicate that respondents want reliable and diagnostic intake tools that determine placement and inform instruction, thereby optimizing teaching and learning. Instructors want ongoing assessment tools that reveal how the students are doing and what to do next. They want assessment instruments to yield useful data that will “mean something” to instructors, students and funders.

User-friendly

The respondents emphasized that they wanted a user-friendly assessment tool—one that was simple to administer, score and interpret. The need for a simple, easy-to-use tool appears to stem from two primary factors: time and expertise. For example, many of the instructors in post-secondary institutions assess students during class time, making a user-friendly tool a necessity. And, while 80 per cent of the survey respondents held a bachelor’s degree or higher, 44 per cent had not taken a credit course focusing on assessment.

Current

A common request on the respondents’ wish list was for updated assessment tools relevant to the curriculum and the student population. The findings show that the most frequently used standardized assessment tool—the Canadian Adult Achievement Test (CAAT)—was published in 1986 and has not been revised. One respondent, who coordinates adult basic education programs for a school district that uses CAAT, expressed these concerns about older tests:

1. Sometimes they no longer match a curriculum that is relevant to the students’ needs.

2. Sometimes the teacher modifies the curriculum to match the test.

3. Students may have access to old copies of the tests (or to students who have taken them previously), bringing validity into question.

In addition to these three points, older tests are usually based on outdated reading theories. CAAT, for example, is based on the text-based model of reading, rather than on a social constructivist or new literacies model. In fact, in spite of changes in reading theories, there has been little change in either the basic content or the format of standardized assessments since the 1930s.

Culturally sensitive

Bias occurs in testing when items systematically function differently for ethnic, gender or age groups. Many of the respondents commented that the tests they used contained cultural bias, particularly toward First Nations and English as a second language students. One respondent noted that “the CAT II has cultural biases that do not measure First Nations’ traditional knowledge and generally First Nations students place at a lower level than necessary with the CAT II.” If educators use assessments that contain bias toward specific populations, the students’ scores will probably be deflated and will not reflect their true abilities.

Due to the diversity of students attending adult basic education programs, instructors want to use assessment tools that are “fair” and without “bias.” The respondents stressed that all tools need to be geographically and culturally sensitive, with respect to First Nations populations and visible minority groups who have taken English as a second language. Many students reside in remote areas, which means they may encounter test items that are geographically biased. For example, consider a test item that asks questions about paying parking tickets. Would this be relevant to students who live in an isolated hamlet in the Territories or in rural areas where parking tickets are non-existent? However, according to Johnston (1998), bias is always embedded in assessments. Johnston writes that “because of the cultural nature of literacy, it is not possible to create an unbiased literacy test; tests always privilege particular forms of language and experience” (p. 98). Despite Johnston’s claim, test developers are not off the hook when it comes to developing culturally sensitive assessment tools. They have a responsibility to reduce bias by analyzing item data separately for different populations and then identifying and discarding items that appear to be biased.

In Closing

In an ideal world, adult educators would have secure employment and benefits, along with paid access to professional development opportunities, consultants and resources. Moreover, they would be able to network with colleagues and would have opportunities to share their beliefs and ideas about assessment. However, the world of adult literacy educators is less than ideal, making it quite challenging to engage in best practices with respect to student assessment.

In an ideal learning environment, assessment tools would be valid and reliable instruments that reflect current literacy and numeracy theories and curriculum. Moreover, they would be normed on an adult population and free of bias. Why do governments mandate certain tests when they fall short of this set of criteria? How can outdated assessments accurately portray the student’s levels of proficiency and be used to inform instruction? The adult literacy community would benefit from the development of new instruments to assess the adult student population. Prior to investing in the development of these tools, governments should establish a national committee to determine standards and principles for test development.

If funders require programs to assess students to determine measurable gains, then this requirement must be accompanied by funding to support capacity. Funders need to invest in the capacity of local programs to collect, interpret and use data to monitor how well programs and students are doing and to improve services. Resources need to be allocated to programs that are commensurate with accountability expectations. If funders want a highly trained workforce that is knowledgeable about assessment practices, they need to ensure that practitioners have the time to practice, dialogue and reflect upon their new knowledge. Funders need to ensure that educators have the capacity to respond to what is learned through professional development. In summary, we need an adult learning system built upon a strong, sustainable infrastructure.

Pat Campbell was the director of the project that included this survey. The three-year project on assessment practices was funded by the National Literacy Secretariat and sponsored by the Centre for Education and Work. Results of the project are included in Measures of Success: Assessment and Accountability in Adult Basic Education, now available from Grass Roots Press. For more information about the project, contact Pat at 780-448-7323.

SOURCES:

Campbell, Pat (2006). Student Assessment in Adult Basic Education: A Canadian Snapshot. Winnipeg, MB: Centre for Education and Work.

Johnston, Peter (1998). The Consequences and the Use of Standardized Tests. In Sharon Murphy, Patrick Shannon, Peter Johnston and Jane Hansen (eds.), Fragile Evidence: A Critique of Reading Assessment. Mahwah, NJ: Lawrence Erlbaum Associates, pp. 89-101.

Merrifield, Juliet (1998). Contested Ground: Performance Accountability in Adult Basic Education. NCSALL Reports #1. Cambridge, MA: The National Center for the Study of Adult Learning and Literacy, Harvard Graduate School of Education.

Ontario Literacy Coalition (2002). Survey on Common Assessment and Learning Outcomes. Toronto.

Smith, Christine, Judy Hofer, Marilyn Gillespie, Marla Solomon and Karen Rowe (2003). How Teachers Change: A Study of Professional Development in Adult Education. NCSALL Reports #25a. Cambridge, MA: The National Center for the Study of Adult Learning and Literacy, Harvard Graduate School of Education.
