Curriculum evaluation is the assessment of the merit and worth of a program of studies, a field of study, or a course of study. It can also be defined as a judgment regarding the quality or value of a response, product, or performance against established criteria or curriculum standards.
Curriculum evaluation models are representations, often graphical, of the actual process of judging the response, product, or performance of the curriculum against the established criteria.
The following are the most common models of the curriculum evaluation process:
Bradley’s Effectiveness Model
Bradley’s (1985) book Curriculum Leadership and Development Handbook provides 10 key indicators that can be used to measure the effectiveness of a developed curriculum. He designed a chart to help identify these 10 indicators of curriculum effectiveness in a school building. To assess how a school or district meets each of the indicators, respond with a Yes or No in the column provided.
EXHIBIT 12.1 Bradley’s Effectiveness Model for Curriculum Development Indicators

1. Vertical curriculum continuity: The course of study enables teachers to have quick and constant access to what is being taught in the grade levels below and above them. Also, upward spiraling prevents undue or useless curricular repetition.
2. Horizontal curriculum continuity: The course of study developed provides content and objectives that are common to all classrooms of the same grade level. Also, daily lesson plans reflect a commonality for the same grade level.
3. Instruction based on curriculum: Lesson plans are derived from the course of study, and curriculum materials used are correlated with the content, objectives, and authentic tasks developed.
4. Curriculum priority: Philosophical and financial commitments are evident. Clerical assistance is provided and reasonable stipends are paid to teachers for work during the summer months. In addition, curriculum topics appear on school board agendas, administrative meeting agendas, and building-staff meeting agendas.
5. Broad involvement: Buildings in the district have teacher representatives on the curricular committees; elementary, middle level or junior high, and high school principals (or designees) are represented; and school board members are apprised of and approve the course of study.
6. Long-range planning: Each program in the district is included in the 5-year sequence and review cycle. Also, a philosophy of education and theory of curriculum permeate the entire school district.
7. Decision-making clarity: Controversies that occur during the development of a program center on the nature of the decision, not on who makes the decision.
8. Positive human relations: The initial thoughts about the curriculum come from teachers, principals, and the curriculum leader. All participating members are willing to risk disagreeing with anyone else; however, communication lines are not allowed to break down.
9. Theory-into-practice approach: The district philosophy, vision, mission, exit (graduation) outcomes, program philosophy, rationale statement, program goals, program objectives, learning outcomes, and authentic tasks are consistent and recognizable.
10. Planned change: Tangible evidence shows that the internal and external publics accept the developed program course of study for the school district. The process of developing a course of study for each program or discipline in a school district is no longer one of determining how to do it, but one of determining how to do it better.

If any of the 10 indicators is identified with a No (negative), consideration should be given to making it a Yes (positive) indicator.
SOURCE: The 10 indicators of effective curriculum
development were adapted from Curriculum Leadership and Development Handbook
(pp. 141–146), by L. H. Bradley, 1985, Englewood Cliffs, NJ: Prentice Hall.
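
Because Bradley’s model reduces to 10 Yes/No judgments, it maps naturally onto a small data structure. The Python sketch below is one hypothetical way to record a district’s responses and flag the indicators that still need to be turned into a Yes; the indicator names come from Exhibit 12.1, while the function and variable names are illustrative assumptions rather than part of Bradley’s model.

```python
# A minimal sketch of Bradley's effectiveness checklist.
# The 10 indicator names are taken from Exhibit 12.1; the function
# and the example responses are invented for illustration.

BRADLEY_INDICATORS = [
    "Vertical curriculum continuity",
    "Horizontal curriculum continuity",
    "Instruction based on curriculum",
    "Curriculum priority",
    "Broad involvement",
    "Long-range planning",
    "Decision-making clarity",
    "Positive human relations",
    "Theory-into-practice approach",
    "Planned change",
]

def needs_attention(responses: dict[str, bool]) -> list[str]:
    """Return the indicators answered No (or left unanswered), i.e.,
    the ones the district should work to make a Yes."""
    return [name for name in BRADLEY_INDICATORS if not responses.get(name, False)]

# Example appraisal: a district that meets all but two indicators.
responses = {name: True for name in BRADLEY_INDICATORS}
responses["Long-range planning"] = False
responses["Planned change"] = False

for indicator in needs_attention(responses):
    print("Consider how to turn this into a Yes:", indicator)
```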
The indicators for effective curriculum development represent working characteristics that any complex organization must have in order to be responsive and responsible to its clients. Further, the measurement can be oriented to meet the needs of any school district—from large to small—and it can focus on a specific evaluation of a district’s curriculum area, such as reading, language arts, math, or any designated content area. The models presented below (Tyler’s objectives-centered model; Stufflebeam’s context, input, process, product model; Scriven’s goal-free model; Stake’s responsive model; and Eisner’s connoisseurship model) give some support to Bradley’s effectiveness model.
Tyler’s Objectives-Centered Model
One of the earliest curriculum evaluation models, which
continues to influence many assessment projects, was that proposed by Ralph
Tyler (1950) in his monograph Basic Principles of Curriculum and Instruction.
As explained in this work and used in numerous
large-scale assessment efforts, the Tyler approach moved
rationally and systematically through several related steps:
1. Begin with the behavioral objectives that have been previously determined. Those objectives should specify both the content of learning and the student behavior expected: “Demonstrate familiarity with dependable sources of information on questions relating to nutrition.”
2. Identify the situations that will give the student the opportunity to express the behavior embodied in the objective and that evoke or encourage this behavior. Thus, if you wish to assess oral language use, identify situations that evoke oral language.
3. Select, modify, or construct suitable evaluation instruments, and check the instruments for objectivity, reliability, and validity.
4. Use the instruments to obtain summarized or appraised results.
5. Compare the results obtained from several instruments before and after given periods in order to estimate the amount of change taking place.
6. Analyze the results in order to determine strengths and weaknesses of the curriculum and to identify possible explanations about the reason for this particular pattern of strengths and weaknesses.
7. Use the results to make the necessary modifications in the curriculum. (as cited in Glatthorn, 1987, p. 273)
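
Step 5 of the sequence is, at heart, a pre/post comparison across several instruments. Tyler prescribes no particular statistic, so the short Python sketch below is only a hypothetical illustration of that step: it takes invented scores from invented instruments and reports the change per instrument and on average.

```python
# Hypothetical illustration of Tyler's step 5: compare results from
# several instruments before and after a given period to estimate the
# amount of change taking place. Instrument names and scores are invented.

pre_scores = {"written test": 62.0, "oral interview": 55.0, "teacher observation": 58.0}
post_scores = {"written test": 71.0, "oral interview": 64.0, "teacher observation": 60.0}

def estimate_change(pre: dict[str, float], post: dict[str, float]) -> dict[str, float]:
    """Per-instrument change, computed as post minus pre."""
    return {name: post[name] - pre[name] for name in pre}

changes = estimate_change(pre_scores, post_scores)
for name, delta in changes.items():
    print(f"{name}: {delta:+.1f}")
print(f"mean change across instruments: {sum(changes.values()) / len(changes):+.1f}")
```

A pattern of gains on some instruments but not others would then feed step 6, the analysis of curricular strengths and weaknesses.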
The Tyler model has several advantages: It is relatively easy to understand and apply. It is rational and systematic. It focuses attention on curricular strengths and weaknesses, rather than being concerned solely with the performance of individual students. It also emphasizes the importance of a continuing cycle of assessment, analysis, and improvement. As Guba and Lincoln (1981) pointed out, however, it suffers from several deficiencies. It does not suggest how the objectives themselves should be evaluated. It does not provide standards or suggest how standards should be developed. Its emphasis on the prior statement of objectives may restrict creativity in curriculum development, and it seems to place undue emphasis on the preassessment and postassessment, ignoring completely the need for formative assessment. Similarly, Baron and Boschee (1995), in their book Authentic Assessment: The Key to Unlocking Student Success, stress that “we are encountering fundamental changes in the way we view and conduct assessment in American schools” (p. 1) and that “sixty years have passed since we experienced such a deep-seated and thoughtful revaluation of our assessment methods” (p. 1).
Stufflebeam’s Context, Input, Process, Product Model
These obvious weaknesses in the Tyler model led several evaluation experts in the late 1960s and early 1970s to attack it and to offer their own alternatives.
The alternative that had the greatest impact was that developed by a Phi Delta Kappa committee chaired by Daniel Stufflebeam (1971). This model seemed to appeal to educational leaders because it emphasized the importance of producing evaluative data for decision making; in fact, decision making was the sole justification for evaluation, in the view of the Phi Delta Kappa committee. To service the needs of decision makers, the Stufflebeam model provides a means for generating data relating to four stages of program operation: context evaluation, which continuously assesses needs and problems in the context to help decision makers determine goals and objectives; input evaluation, which assesses alternative means for achieving those goals to help decision makers choose optimal means; process evaluation, which monitors the processes both to ensure that the means are actually being implemented and to make the necessary modifications; and product evaluation, which compares actual ends with intended ends and leads to a series of recycling decisions. During each of these four stages, specific steps are taken:
• The kinds of decisions are identified.
• The kinds of data needed to make those decisions are identified.
• Those data are collected.
• The criteria for determining quality are established.
• The data are analyzed on the basis of those criteria.
• The needed information is provided to decision makers. (as cited in Glatthorn, 1987, pp. 273–274)
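
One way to keep the four stages straight is to notice that each pairs a stage of program operation with the question it answers for decision makers. The Python sketch below records that pairing as plain data; the wording paraphrases the description above, and the names are invented for illustration, not drawn from Stufflebeam.

```python
# Illustrative sketch only: the four CIPP stages paired with the question
# each answers for decision makers (paraphrased from the description above).

CIPP_STAGES = {
    "context": "What needs and problems exist, and what goals and objectives follow from them?",
    "input": "Which alternative means for achieving those goals are optimal?",
    "process": "Are the chosen means actually being implemented, and what modifications are needed?",
    "product": "How do actual ends compare with intended ends, and what recycling decisions follow?",
}

for stage, question in CIPP_STAGES.items():
    print(f"{stage} evaluation asks: {question}")
```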
The context, input, process, product (CIPP) model, as it has come to be called, has several attractive features for those interested in curriculum evaluation. Its emphasis on decision making seems appropriate for administrators concerned with improving curricula. Its concern for the formative aspects of evaluation remedies a serious deficiency in the Tyler model. Finally, the detailed guidelines and forms created by the committee provide step-by-step guidance for users. The CIPP model, however, has some serious drawbacks associated with it. Its main weakness seems to be its failure to recognize the complexity of the decision-making process in organizations. It assumes more rationality than exists in such situations and ignores the political factors that play a large part in these decisions. Also, as Guba and Lincoln (1981) noted, it seems difficult to implement and expensive to maintain.
Scriven’s Goal-Free Model
Michael Scriven (1972) was the first to question the
assumption that goals or objectives are crucial in the evaluation process.
After his involvement in several evaluation projects where so-called side
effects seemed more significant than the original objectives, he began to
question the seemingly arbitrary distinction between intended and unintended
effects. His goal-free model was the outcome of this dissatisfaction. In
conducting a goal-free evaluation, the evaluator functions as an unbiased
observer who begins by generating a profile of needs for the group served by a
given program
(Scriven is somewhat vague as to how this needs profile is
to be derived). Then, by using methods that are primarily qualitative in
nature, the evaluator assesses the actual effects of the program. If a program
has an effect that is responsive to one of the identified needs, then the
program is perceived as useful. Scriven’s main contribution, obviously, was to
redirect the attention of evaluators and administrators to the importance of
unintended effects—a redirection that seems especially useful in education.
If a mathematics program achieves its objectives of improving computational
skills but has the unintended effect of diminishing interest in mathematics,
then it cannot be judged completely successful. Scriven’s emphasis on
qualitative methods also seemed to come at an opportune moment, when there was
increasing dissatisfaction in the research community with the dominance of
quantitative methodologies. As Scriven himself notes, however, goal-free
evaluation should be used to complement, not supplant, goal-based assessments.
Used alone, it cannot provide sufficient information for the decision maker.
Some critics have faulted Scriven for not providing more explicit directions
for developing and implementing the goal-free model; as a consequence, it probably can be used only by experts who do not require explicit guidance in
assessing needs and detecting effects.
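
The distinctive move in a goal-free evaluation is to judge observed effects, intended or not, against a profile of needs rather than against stated objectives. The hypothetical Python sketch below makes that matching concrete using the mathematics example above; the needs, effects, and function names are invented and are not part of Scriven’s model.

```python
# Illustrative sketch of goal-free matching: appraise a program by whether
# its observed effects (intended or not) respond to identified needs,
# with no reference to the program's stated goals. All data are invented.

needs_profile = {"computational skill", "interest in mathematics", "problem solving"}

observed_effects = {
    "computational skill": "improved",        # the stated objective
    "interest in mathematics": "diminished",  # an unintended side effect
}

def appraise(needs: set[str], effects: dict[str, str]) -> None:
    """Report each observed effect against the needs profile."""
    for outcome, direction in effects.items():
        status = "responds to an identified need" if outcome in needs else "outside the needs profile"
        print(f"{outcome}: {direction} ({status})")

appraise(needs_profile, observed_effects)
```

On this reading, the diminished interest counts against the program even though the stated objective was met, which is exactly the judgment described in the paragraph above.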
Stake’s Responsive Model
Robert Stake (1975) made a major contribution to curriculum evaluation in his development of the responsive model. The responsive model is based explicitly on the assumption that the concerns of the stakeholders—those for whom the evaluation is done—should be paramount in determining the evaluation issues. He made the point this way:
To emphasize evaluation issues that are important for each
particular program, I recommend the responsive evaluation approach. It is an
approach that trades off some measurement precision in order to increase the
usefulness of the findings to persons in and around the program. . . . An
educational evaluation is a responsive evaluation if it orients more directly
to program activities than to program intents; responds to audience
requirements for information; and if the different value perspectives present
are referred to in reporting the success and failure of the program. (p. 14)
Stake recommends an interactive and recursive evaluation process that embodies these steps:

• The evaluator meets with clients, staff, and audiences to gain a sense of their perspectives on and intentions regarding the evaluation.
• The evaluator draws on such discussions and the analysis of any documents to determine the scope of the evaluation project.
• The evaluator observes the program closely to get a sense of its operation and to note any unintended deviations from announced intents.
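
Because the process is interactive and recursive rather than linear, one hypothetical way to capture its shape is as an ordered set of steps an evaluator revisits as stakeholder concerns evolve. The Python sketch below is illustrative only; the step labels paraphrase the list above, and the two-cycle loop is an assumption of the sketch, not a prescription from Stake.

```python
# Illustrative sketch: Stake's responsive evaluation as steps revisited
# over successive cycles. Step labels paraphrase the list above; the
# two-cycle loop is an invention of this example.

RESPONSIVE_STEPS = [
    "meet with clients, staff, and audiences about perspectives and intentions",
    "use discussions and document analysis to set the evaluation's scope",
    "observe the program and note unintended deviations from announced intents",
]

def run_cycle(cycle: int) -> None:
    """One pass through the steps; a responsive evaluator repeats them
    as stakeholder concerns and program activities evolve."""
    for number, step in enumerate(RESPONSIVE_STEPS, start=1):
        print(f"cycle {cycle}, step {number}: {step}")

# Two passes illustrate the recursive character of the process.
for cycle in (1, 2):
    run_cycle(cycle)
```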