Criticisms
Much of the criticism directed at mathematics placement testing focuses on research suggesting that many other factors, particularly noncognitive or psychosocial factors, are important in determining a student's success in mathematics (Bridgeman and Wendler, 1991; House, 1995; Penny and White, 1998; Ting and Robinson, 1998). These factors may include self-confidence, commitment, attendance, gender, ethnic background, age or maturity, financial circumstances, self-rating of math ability, parents' education level, motivation, teachers' attitudes, mode of instruction, and teacher's gender (Ting and Robinson, 1998). Critics propose that some of these factors should be assessed in specially designed questionnaires or interviews and used in conjunction with math test scores to make better placement decisions.
Another area of criticism focuses on the math placement test itself. The tests may need improvement in content and predictive validity, discrimination, reliability, and the choice of cut scores (Morante, 1989; Truman, 1992). Closely related is the debate over using general achievement tests (such as the SAT) versus content-specific basic skills placement tests. After an extensive review of assessment and placement, Jerry Weber (1986) concludes:
Content-specific placement tests in combination with other student data will yield effective assessment forming a basis for placement decisions. Performance on general achievement tests (ACT or SAT) or a subsection of one achievement test should not determine basic skills course placement (p. 28).
Similarly, Wattenbarger and McLeod (1989) note that studies conducted on Florida colleges show that "...standardized entrance examinations do not provide information of sufficient accuracy to justify placement into the mathematics curriculum based solely on the math portion of the tests (SAT and ACT were used in one study)" (p. 18). Most community colleges and a high proportion of universities do use institutionally created content-specific tests, though about half the universities are likely to rely on SAT or ACT scores (Lederman, et al., 1985; McDonald, 1988).
Finally, the remedial courses offered to upgrade students' math skills are subject to criticism from a number of perspectives. Courses vary widely in content, duration, and mode of delivery. This may simply reflect different needs in different contexts and an effort to be flexible in meeting students' needs. More significant is the observation that many do not use any special instructional strategies directed at the characteristics of underprepared students (Laughbaum, 1992). The faculty who teach the remedial program may be temporary, less qualified, and not well integrated into the post-secondary mathematics departments, to the detriment of their students (Penny and White, 1998).
The placement testing process
Upon enrollment, a student will be recommended or required to take placement tests, usually in English or writing, math, and reading. Testing may also include a computer-scored essay or an English-as-a-second-language assessment. Students with disabilities may take an adaptive version, such as an audio or braille format, compliant with the Americans with Disabilities Act (ADA). Advisors interpret the scores and discuss course placement with the student. As a result of the placement, students may take multiple developmental courses before qualifying for college-level courses. Students with the most developmental courses have the lowest odds of completing the developmental sequence or passing gatekeeper college courses such as Expository Writing or College Algebra.[8] Adelman has shown that this is not necessarily a result of developmental education itself.[9]
Student acceptance
Many students do not understand the high-stakes nature of placement testing. Lack of preparation is also cited as a problem.[citation needed] According to a study by Rosenbaum, Schuetz and Foran, roughly three quarters of students surveyed say that they did not prepare for their placement tests.[10]
Once students receive their placement, they may be permitted or required to begin taking developmental classes as prerequisites to credit-bearing, college-level classes that count toward their degree. Most students are unaware that developmental courses do not count toward a degree.[11] Some institutions prevent students from taking college-level classes until they finish their developmental sequence(s), while others apply course prerequisites. For example, a psychology course may carry a reading prerequisite such that a student placing into developmental reading may not sign up for psychology until they complete the developmental reading requirement.
Federal Student Aid programs pay for up to 30 hours of developmental coursework. Under some placement regimens and at some community colleges, low-scoring students may require more than 30 hours of such classes.
History
Placement testing has its roots in remedial education, which has always been part of American higher education. Informal assessments were given at Harvard as early as the mid-1600s in the subject of Latin. Two years earlier, the Massachusetts Law of 1647, also known as the "Old Deluder Satan Law," called for grammar schools to be set up with the purpose of "being able to instruct youth so far as they shall be fitted for the university."[12] Predictably, many incoming students lacked sufficient fluency with Latin and got by with the help of tutors. In 1849 the University of Wisconsin established the country's first in-house preparatory department. Late in the century, Harvard introduced a mandatory expository writing course, and by the end of the 19th century, most colleges and universities had instituted both preparatory departments and mandatory expository writing programs.
According to John Willson,
The chief function of the placement examination is prognosis. It is expected to yield results which will enable the administrator to predict with fair accuracy the character of work which a given individual is likely to do. It should afford a reasonable basis for sectioning a class into homogeneous groups in each of which all individuals would be expected to make somewhat the same progress. It should afford the instructor a useful device for establishing academic relations with his class at the first meeting of the group. It should indicate to the student something of the preparation he is assumed to have made for the work upon which he is entering and introduce him to the nature of the material of the course.
Historically, the view that colleges can remediate abilities that may be lacking was not universal. Hammond and Stoddard wrote in 1928: "Since, as has been amply demonstrated, scholastic ability is, in general, a quite permanent quality, any instrument that measures factors contributing to success in the freshman year will also be indicative of success in later years of the curriculum."[15]
Entrance examinations began with the purpose of predicting college grades by assessing general achievement or intelligence. In 1914 T.L. Kelley published the results of his course-specific high school examinations designed to predict "the capacity of the student to carry a prospective high school course."[16] The courses were algebra, English, geometry, and history, with correlations ranging from r = .31 (history) to r = .44 (English).
Entrance examinations and the College Entrance Examination Board (now the College Board) allowed colleges and universities to formalize entrance requirements and shift the burden of remedial education to junior colleges in the early 20th century and later to community and technical colleges.[17]
Policies
Required placement testing and remediation were not always considered desirable. According to Robert McCabe, former president of Miami-Dade Community College, at one time "community colleges embraced a completely open policy. They believed that students know best what they could and could not do and that no barriers should restrict them....This openness, however, came with a price....By the early 1970s, it became apparent that this unrestricted approach was a failure."[18]
Examples of state or college placement testing policies:
Placement testing using state-approved tests is required (or encouraged) for all students (or all students taking classes for credit, or all new students taking classes for credit)
Students must meet approved cut scores to gain access to specific courses
Placement testing waived for students demonstrating college readiness via admissions tests (typically high scores on ACT or SAT tests, such as roughly 21 in relevant subjects on the ACT and roughly 500 in relevant subject areas on the SAT), other approved placement tests, or previous college coursework in math and English (a minimal sketch of such a rule follows this list)
Students allowed/required to retest after/within a certain length of time (sometimes for a fee).
Students must begin remedial coursework within a specified time period.
Before testing/retesting students are encouraged/required to review study guides or complete a review course.
Cut score levels, roles and reviews are described.
Remedial students encouraged/required to take diagnostic assessments before/during their coursework
Integration of criteria beyond test scores into remediation decision-making.
Students may not register for college-level classes until they have completed all (or certain) prescribed remedial courses.
Defining remedial prerequisites, such as a placement test score or remedial coursework, for specific courses.
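To make these rules concrete, the following is a minimal Python sketch of how a policy combining cut scores and admissions-test waivers might be encoded. The thresholds, subject handling, and function names are illustrative assumptions, not any actual state's or college's policy.

```python
# Hypothetical placement-policy sketch; all thresholds are assumptions.
ACT_WAIVER = 21      # assumed ACT subject-score waiver threshold
SAT_WAIVER = 500     # assumed SAT subject-score waiver threshold
PLACEMENT_CUT = 75   # assumed placement-test cut score

def place_student(act=None, sat=None, placement_score=None,
                  prior_college_credit=False):
    """Return a course placement under the example policy rules."""
    # Waiver: admissions-test scores or previous coursework can
    # exempt a student from placement testing entirely.
    if prior_college_credit \
            or (act is not None and act >= ACT_WAIVER) \
            or (sat is not None and sat >= SAT_WAIVER):
        return "college-level course (placement test waived)"
    # Otherwise the student must test and meet the approved cut score.
    if placement_score is None:
        return "placement test required before registration"
    if placement_score >= PLACEMENT_CUT:
        return "college-level course"
    return "remedial course (retest permitted after review)"

print(place_student(act=23))              # waived via ACT score
print(place_student(placement_score=68))  # remedial placement
```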
Alternatives
Testing other elements of student ability
Conley recommends adding assessments of contextual skills and awareness, academic behaviors, and key cognitive strategies to the traditional math, reading, and writing tests.[1] Boylan proposes examining affective factors such as "motivation, attitudes toward learning, autonomy, or anxiety."[19]
Alternative test formats
In 1988, Ward predicted that computer adaptive testing would evolve to cover more advanced and varied item types, including simulations of problem situations, assessments of conceptual understanding, textual responses, and essays.[20]: 6–8 Tests now being developed incorporate conceptual questions in multiple-choice format (for example, by presenting a student with a problem and the correct answer, then asking why that answer is correct) and computer-scored essays such as e-Write and WritePlacer.
In a Request for Information on a centralized assessment system, the California Community Colleges System asked for "questions that require students to type in responses (e.g. a mathematical equation)" and for questions where "Students can annotate/highlight on the screen in the reading test."[21] Some massive open online courses, such as those run by edX or Udacity, automatically assess user-written computer code for correctness.[22]
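As a rough illustration of the kind of automated code assessment such courses perform, the following minimal Python sketch runs a hypothetical student submission against instructor-defined test cases. The exercise, function name, and grading interface are invented for illustration and do not reflect edX's or Udacity's actual systems.

```python
# Hypothetical autograder sketch: execute a student's submission and
# check it against expected input/output pairs.
STUDENT_CODE = """
def area(width, height):
    return width * height
"""

# (arguments, expected result) for each check
TEST_CASES = [((3, 4), 12), ((5, 0), 0), ((2.5, 2), 5.0)]

def grade(submission: str) -> str:
    namespace = {}
    try:
        exec(submission, namespace)   # run the student's code
        func = namespace["area"]      # look up the required function
    except Exception as exc:
        return f"Does not run: {exc}"
    passed = sum(1 for args, want in TEST_CASES if func(*args) == want)
    return f"{passed}/{len(TEST_CASES)} test cases passed"

print(grade(STUDENT_CODE))  # -> 3/3 test cases passed
```

A production grader would additionally sandbox the untrusted code and limit its runtime; this sketch omits those safeguards for brevity.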
Diagnostic placement testing
Placement testing focuses on a holistic score to decide placement into various levels, but it is not designed for more specific diagnosis. Increasing diagnostic precision would involve changes to both scoring and test design, and could support better-targeted remediation programs in which students focus on areas of demonstrated weakness within a broader subject.[citation needed]
"The ideal diagnostic test would incorporate a theory of knowledge and a theory of instruction. The theory of knowledge would identify the student's skills and the theory of instruction would suggest remedies for the student's weaknesses. Moreover, the test would be, in a different sense of the word from what we have previously employed, adaptive. That is, it would not subject students to detailed examinations of skills in which they have acceptable overall competence or in which a student has important strengths and weaknesses—areas where an overall score is not an adequate representation of the individual's status."[20]: 5
Test preparation
A controversy exists over the value of test preparation and review. Test publishers maintain that their assessments should be taken without preparation, and that such preparation will not yield significantly higher scores. Test preparation organizations claim the opposite. Some schools have begun to support test preparation.
The publishers' claims are partly based on the theory that any test a student can prepare for does not measure general proficiency. Institutional test preparation programs are also said to risk washback, the tendency for test content to dictate the prior curriculum, or "teaching to the test".[23] Nevertheless, various test preparation methods have shown effectiveness: test-taking tips and training, familiarity with the answer-sheet format, and strategies that mitigate test anxiety.[24]
Some studies offer partial support for the test publishers' claims. For example, several studies concluded that for admissions tests, coaching produces only modest, if statistically significant, score gains.[25][26] Other studies, and claims by companies in the preparation business, were more positive.[27] Other research has shown that students score higher with tutoring, with practice using cognitive and metacognitive strategies, and under certain test parameters, such as being allowed to review answers before final submission, something that most computer adaptive tests do not allow.[28][29][30]
Other research indicates that reviewing for placement tests may raise scores by helping students become comfortable with the test format and item types. It may also refresh skills that have simply grown rusty. Placement tests often involve subjects and skills that students haven't studied since elementary or middle school, and for older adults, there may be many years between high school and college. In addition, students who attach a consequence to test results, and therefore take placement tests more seriously, are likely to achieve higher scores.[31]
According to a 2010 California community college study, about 56% of colleges did not provide practice placement tests, and for those that did, many students were not made aware of them. In addition, their students "did not think they should prepare, or thought that preparation would not change their placement."[32]
By 2011, at least three state community college systems (California, Florida, and North Carolina) had asked publishers to bid to create customized placement tests with integrated test reviews and practice tests. Meanwhile, some individual colleges have created online review courses complete with instructional videos and practice tests.
Simulations
In "Using Microcomputers for Adaptive Testing," Ward predicted the computerization of branching simulation problems, such as those used in professional licensing exams.[20]
Secondary/tertiary alignment
Since placement testing is done to measure college readiness, and high schools in part prepare students for college, it only makes sense that K-12 and higher education curricula be aligned. Such realignment could take many forms, including K-12 changes, collegiate changes or even collaboration between the two levels. Various efforts to improve education may undertake this challenge, such as the national K-12 Common Core State Standards Initiative in the United States, Smarter Balanced Assessment Consortium (SBAC), or the Partnership for Assessment of Readiness for College and Careers (PARCC).
As of 2012, neither kind of alignment has progressed to the point of close coordination of curriculum, assessments, or learning methodologies between public school systems and systems of higher education. Recently, legislatures in several states (including California, Florida, and Connecticut) have passed mandates to redefine developmental curricula, in response to diminishing four-year college graduation rates.
Succeeding in a college course requires students to fulfill a multitude of tasks to demonstrate their competency in a given class. Frederick Ngo's study of multiple measures further criticized the use of placement tests, finding that "college readiness is a function of several academic and non-academic factors that placement tests do not adequately capture".[33] Furthermore, Belfield and Crosta's 2012 study establishes a "positive but weak association between placement test scores and college GPA".[33] The key skills and attributes that lead to college success cannot simply be extrapolated from performance on a single placement test.
Scott-Clayton claims "it is easier to distinguish between those likely to do very well and everyone else than it is to distinguish between those likely to do very poorly and everyone else".[34] This exacerbates the problem: those who do well on the placement test have a high probability of succeeding in college-level coursework (high predictive validity), while for those who do poorly, the test predicts comparatively little about their trajectory. Regardless, those who start in remediation often become stuck in a cycle of remedial coursework.