International Journal of Educational Leadership Preparation


NCPEA

International Journal of Educational Leadership Preparation

Volume 8, Number 1, March 2013
A nationally refereed journal sponsored and published by the
National Council of Professors of Educational Administration

NCPEA

International Journal of Educational
Leadership Preparation

Spring (March 2013)
Volume 8, Number 1

ISSN 1532-0723

Linda Lemasters, Editor

NCPEA Publications Director, Theodore B. Creighton, Virginia Tech
NCPEA Publications Associate Director, Brad E. Bizzell, Radford University

NCPEA International Journal of Educational Leadership Preparation is a nationally refereed
journal published two times a year, in Spring (March/April), and Fall (September/October) by
the National Council of Professors of Educational Administration. Authors may reproduce
their work for professional use but others must acquire written or oral permission to
photocopy or print any material for any purpose.


Note from NCPEA Publications Director, Theodore Creighton

Beginning with this Volume 8, Number 1 (March 2013) issue of the International Journal of
Educational Leadership Preparation (IJELP), we notify our authors, readers, reviewers, and
the education community at large that NCPEA will contribute this content to the Open
Education Resources (OER) movement. This contribution to OER will be permanent and
will continue into the future.

In August 2005, NCPEA partnered with Rice University and the Connexions Project to
publish our IJELP as open and free to all who had access to the Internet. Currently, there are
over 400 peer-reviewed research manuscripts in the NCPEA/Connexions database. The
purpose of the NCPEA/Knowledge Base Connexions Project is to “add to the knowledge base
of the educational administration profession” and “aid in the improvement of administrative
theory and practice, as well as administrative preparation programs.” Our partnership
continues but a new door has opened for NCPEA Publications to join the OER movement in a
more substantive and direct way. In March 2013, NCPEA Publications and the NCPEA
Executive Board committed the IJELP to the OER movement.

What are Open Educational Resources (OER)?
Open Educational Resources (OER) are teaching and learning materials that you may freely
use and reuse, without charge. OER differ from other resources an educator may use in their
licensing: they have been authored or created by an individual or organization that chooses to
provide access to all, at no charge. NCPEA Publications is committed to providing access to
all, while assuring authors of full attribution as others use the material.

The worldwide OER movement is rooted in the idea that equitable access to high-quality
education is a global imperative (and to NCPEA, a moral/ethical responsibility and issue of
social justice). Open Educational Resources, or OER, offer opportunities for systemic change
in teaching and learning through accessible content, and importantly, through embedding
participatory processes and effective technologies for engaging with learning. The OER
Commons project aims to grow a sustainable culture of sharing among educators at all levels.

What is the OER Commons?
The Institute for the Study of Knowledge in Education (ISKME) created OER Commons,
publicly launched in February 2007, to provide support for and build a knowledge base
around the use and reuse of open educational resources (OER). As a network for teaching and
learning materials, the web site offers engagement with resources in the form of social
bookmarking, tagging, rating, and reviewing. OER Commons has forged alliances with over
120 major content partners to provide a single point of access through which educators and
learners can search across collections to access thousands of items, find and provide
descriptive information about each resource, and retrieve the ones they need. By being
"open," these resources are publicly available for all to use.


What NCPEA OER Is Not!
NCPEA open educational resources are not an open door at the NCPEA Publications
submission and review stages. We have always insisted on and will continue to require very
thorough peer reviews (double- and often triple-blind). NCPEA Publications is fortunate to
have a cadre of approximately 400 professional reviewers (university professors). Topic
Editors first consider a submitted manuscript and, if the content is appropriate, select and
assign two reviewers who also have expertise and interest in the manuscript's specific topic.
This process assures that an author's manuscript will be read by reviewers with expertise and
experience in that area. The IJELP has an approximate acceptance rate of 20%; this current
Volume 8, Number 1 has a 22% acceptance rate.
The "openness" of the IJELP OER comes at the publication stage. Once an issue is published
(as this issue was on March 1, 2013), it is formatted and published on an open-access website,
indexed by the Education Resources Information Center (ERIC), catalogued as a
"commendable journal" in Cabell's Directory, and provided to the Open Educational Resource
database. The IJELP is currently viewed and read by educators in over 72 countries (many in
the developing world) and all 50 U.S. states (data provided by Google Analytics).

Read More at: http://www.oercommons.org

"These peer-reviewed manuscripts are licensed under a Creative Commons, Non-Commercial,
No-Derivatives 3.0 license. They may be used for non-commercial educational purposes.
When referring to an article, or portions thereof, please fully cite the work and give full
attribution to the author(s)."

The manuscripts in Volume 8, Number 1 (March 2013) have been peer-reviewed, accepted,
and endorsed by the National Council of Professors of Educational Administration (NCPEA)

as significant contributions to the scholarship and practice of school administration
and K-12 education


International Journal of Educational Leadership Preparation

Prevalence of Evaluation Method Courses in Education Leader Doctoral
Preparation .... 1
Tara L. Shepperson

Answering the Sustainability Question: A 3 Year Follow-up Report on a Wallace
Foundation Training Grant Program and What Did Participants See as Important? .... 15
Michael A. Stearns and Lisabeth S. Margulus

Training Programme for Secondary School Principals: Evaluating its
Effectiveness and Impact .... 31
Disraeli M. Hutton

Principal and School-Level Effects on Elementary School Student Achievement .... 49
Lantry L. Brockmeier, Gene Starr, Ronny Green, James L. Pate, and Donald W. Leech

21st Century Challenges in Higher Education: Strategic Changes and
Unintended Consequences .... 62
James W. Beattie, Bill Thornton, Rita Laden, and David Brackett

Alternative Administrative Certification: Socializing Factors Influencing
Program Choice .... 72
Dana L. Bickmore, Steven T. Bickmore, and Sarah Raines

Knowledge of Effective Educational Leadership Practices .... 93
Leigh T. Barton

Decision-Making and Problem-Solving Practices of Superintendents Confronted
with District Dilemmas .... 103
Rene Noppe, Stuart Yager, Carol Webb, and Bridget Sheng

Principal Internships in Indiana: A Promising or Perilous Experience .... 121
Lynn Lehman

Lessons Learned in Preparing Tomorrow's Principals to Become Instructional
Leaders .... 140
David L. Gray and Joel P. Lewis

Fostering a School Technology Vision in School Leaders .... 144
Jayson W. Richardson, Kevin Flora, and Justin Bathon

Transforming Equity-Oriented Leaders: Principal Residency Network Program
Evaluation .... 161
Donna Braun, Felice D. Billups, Robert K. Gable

© 2013 by the National Council of Professors of Educational Administration. All rights reserved


Prevalence of Evaluation Method Courses in Education
Leader Doctoral Preparation

Tara L. Shepperson
Eastern Kentucky University

This exploratory study investigated the prevalence of single evaluation methods courses in doctoral education
leadership programs. Analysis of websites of 132 leading U.S. university programs found 62 evaluation methods
courses in 54 programs. Content analysis of 49 course catalog descriptions resulted in six categories: survey,
planning and implementation, research and inquiry, leadership and school improvement, special approaches,
and original student research. Most often elective and outside the required curriculum, evaluation methods
appear to hold a consistent but secondary place in doctoral leadership training, despite their applicability in
education.

INTRODUCTION

Research shows that effective school leadership results in more successful students and that,
worldwide, school systems seek leaders with the skills to handle the multiple responsibilities
of using data, informing decisions, and making assessments to ensure programs benefit
learning, especially at the local or site level (Darling-Hammond, LaPointe, Meyerson, & Orr,
2007; Lauer, 2006; Pont, Nusche, & Moorman, 2008). Yet relatively few studies have
investigated research course content (Bustamante & Combs, 2011; Huck, 2008), and even
fewer have examined evaluation methods training in preparation programs (Shepperson &
Fierro, 2010). Pragmatic in nature, evaluation seems valuable to administrators who make
decisions about the utility, effectiveness, and consequences of programs. In an era of
accountability and data-based decisions, assessment and evaluation would seem to be
important content in the professional preparation of education leaders.

Therefore, empirical study examining the scope of research training, and specifically
the inclusion of single courses in evaluation in education leader preparation, is warranted.
This study sought to address this gap by analyzing evaluation course titles and descriptions in
education leadership doctoral programs across the United States. Course offerings were
explored in 132 doctoral programs at research universities and colleges in 43 states and the
District of Columbia. In all, 62 course titles and 49 catalog course descriptions from 54
programs were analyzed. Findings allowed conclusions about the relevance and importance
of evaluation methods in education leadership preparation programs.

NCPEA International Journal of Educational Leadership Preparation, Vol. 8, No. 1– March 2013
ISSN: 2155-9635 © 2013 National Council of Professors of Educational Administration


LITERATURE AND BACKGROUND

Educational Evaluation Defined

Evaluation is a broad field within applied social science in which practitioners collect,
interpret, and communicate information to improve the effectiveness of institutions and
programs (Rossi, Lipsey, & Freeman, 2004). Iconic definitions include Scriven’s (1967)
description of evaluation as a systematic process to determine the quality or value of a
program or product. Practitioners employ a range of methods and work within multiple
arenas, but retain the overarching goal of making programs work better to solve social
problems through an idealized sequence of recognizing a problem, applying alternative
solutions, evaluating new approaches, and adopting those that seem most suitable (Shadish,
Cook, & Leviton, 1991). Spanning multiple fields, evaluation has expanded, with training
programs, professional associations, research and practitioner journals, and codes of
professional practice guiding its growing position as a profession (AEA, 2012; Shadish, 1998).

Educational evaluation in the United States largely derives from the growth of the
federal role in education, marked by the 1965 passage of the Elementary and Secondary
Education Act, federal policies for monitoring and assessing student performance, and the
expanded role of state and district education offices in managing assessment information.
Supported by Phi Delta Kappa, the National Study Committee on Evaluation produced a
watershed report acknowledging the inexperience of school administrators and university
professors in evaluation and providing a detailed discussion of evaluation approaches for
education (Stufflebeam, Foley, Gephart, Guba, Hammond, Merriman, & Provus, 1971).
These single-method approaches have been built upon, at least in part, by expanding
accountability requirements and the current Office of Educational Research and
Improvement's (OERI) gold standard for experimental and quasi-experimental education
research (Rudalevige, 2009).

Accountability, a growing national and global trend, represents a large portion of
educational evaluation. Yet schools regularly conduct small-scale evaluations to solve
localized and immediate problems (Greene, 1994; Mertens, 2008; Stake, 2010). This has led
to some innate tension between the science of measurement and more culturally sensitive,
participatory, and naturalistic methods (Ryan & Cousins, 2009). Christie and Klein (2009)
argue that standardized accountability need not preclude local decision-making. Rather,
habitual accountability may breed a climate of review and reflection, increase evaluative
practice, improve a school's capacity to self-assess, and ultimately advance achievement
(Ryan & Feller, 2009).

Evaluation in Education Literature

Evaluation researchers publish on educational topics and in educational journals. Heberger,
Christie, and Alkin’s (2010) bibliometric study found that educational evaluation retains an
influential position within education literature. When randomly sampled, the cited references
from nine evaluation theorists (Campbell, Rossi, Weiss, Stufflebeam, Patton, Preskill,
Scriven, House, and Eisner) totaled 3,791 articles, of which 866 (22.8%) fit within the
category of education and educational research. Among the top 22 journals in which these
select evaluation theorists published, a total of nine were educational journals, including Phi
Delta Kappan, Journal of Teacher Education, Journal of Educational Measurement, Alberta
Journal of Educational Research, Teachers College Record, Curriculum Inquiry, and Journal
of Aesthetic Education, and two on education leadership, Educational Administration
Quarterly and Educational Leadership.

Teaching Evaluation in Education

Research on teaching evaluation within education programs is limited in scope and depth, and
much is extrapolated from cross-disciplinary studies. A 1986 issue of New Directions for
Program Evaluation was devoted to the teaching of evaluation across many disciplines
including education where it most often related to behavioral testing and measurement in
departments of Educational Psychology (Sanders, 1986). Elsewhere, program evaluation
courses were found across disciplines, generally focused on introduction to the discipline,
design and methodology, and planning and implementation (Connors, 1986; Davis, 1986;
Kronenfeld, 1981; Sanders, 1986). Delivery remained largely lecture-based, sometimes
incorporating practicum or role-playing experiences, but usually limited to one stand-alone
course (Alkin & Christie, 2002; Fierro, n.d.; Trevisan, 2004). Learning outcomes were
generally limited to the level of informed consumer rather than proficient user, perhaps with
the intention that awareness might translate into later professional practice (Donaldson, 2007;
Morris, 1994), and in keeping with the notion that professional training, especially at the
graduate level, involves learning what Golde (2007) calls “disciplinary norms and identities”
(p. 344) more than in-depth understanding of evaluation (Stevahn, King, Ghere, & Minnema,
2005). This precedent suggests evaluation was, to varying degrees, integrated into discipline
content, and that single course offerings remain a viable unit for measuring the prevalence of
evaluation training within academic programs.

Recent work by LaVelle and Donaldson (2010) points to a strong association between
evaluation training, schools of education, and university-based graduate programs in
evaluation. The authors found web-based evidence of 48 institutions offering at least two
courses in evaluation at the graduate level, with 35 of these offering a specialization or
concentration in evaluation. A majority, 29 (60.4%) of these were located in schools of
education, of which two were leadership programs. Those departments within schools of
education with evaluation programs mainly conferred doctoral degrees, including 36 with
PhD and 4 with EdD degrees, with one education specialist (EdS) and five master’s (MEd or
EdM) programs. While these associations may not directly indicate evaluation training for
students in educational leadership, they do suggest a continued relationship among
evaluation, psychometrics, research, and graduate education programs.

Education Leader Preparation

There are estimated to be nearly 600 doctoral education leader preparation programs in the
United States (Berry & Beach, 2009; Shoho, 2010). Many programs have suffered
reputations as neither scholarly nor practical, failing to provide the skills needed by those
who would run schools, districts, and state agencies. Arthur Levine's (2005) well-publicized
indictment of educational administration as a field "rooted neither in practice nor research,
offering programs that fail to prepare school leaders" (p. 61) was one among many criticisms
of a preparation system considered at least partly to blame for the poor performance of
American public schools

(Baker, Orr, & Young, 2007; Fullan, 2007). Historically, leading school administration
journals showed few investigations into leadership preparation (Murphy & Vriesenga, 2004).
That has changed, and recent preparation research has focused on practitioner training and
competencies, including alignment with the Interstate School Leaders Licensure Consortium
(ISLLC) standards and the Educational Leadership Constituent Council (ELCC) standards
used in National Council for Accreditation of Teacher Education (NCATE) accreditation.
Leadership research, growth of professional organizations, development of standards, and
preparation needs have largely benefitted from major foundation support, which has been
prevalent since the 1950s (Murphy, Young, Crow, & Ogawa, 2009; Milstein, 1993; Orr,
Cordeiro, Thome, & Martinez, 2010).

Standards for educational administrators from the Interstate School Leaders Licensure
Consortium (ISLLC) represent an internal process that Murphy (2003) referred to as
“reculturing the profession” (p. 5) from the 20th century school management model into a
contemporary profession focused on students, learning, and teaching (Donaldson, 2001). The
ISLLC standards suggest competencies that could be construed as evaluative, including data-
based decision making, instructional assessment, assurance of effective management and safe
environments, and responsiveness to community interests (ISLLC, 2008; Wright & Gray,
2007). Two examples: under Standard 1, Performance, the standards state that a school
administrator should ensure that "the vision, mission, and implementation plans are regularly
monitored, evaluated, and revised" (ISLLC, 2008); under Standard 2, Performance, they state
that a school administrator has knowledge and understanding of "measurement, evaluation,
and assessment strategies" (ISLLC, 2008). The term evaluation appears only five times in the
document. Although only occasionally explicit, the ISLLC standards indicate evaluative
competencies for effective school leaders and therefore imply their presence in preparation
programs.

Other research centers on the redesign of doctoral programs to better suit
practitioners (Jean-Marie & Normore, 2010; Walstrom, Louis, Leithwood, & Anderson,
2010). Redesigned programs are found at a growing number of higher education institutions,
sometimes through the Carnegie Project on the Education Doctorate, which seeks relevant
curriculum and clinical experiences and a more explicit divide between the scholarly
dissertation of the PhD and the experiential problem-solving of the EdD practitioner degree
(Jean-Marie & Normore, 2010; Shulman, Golde, Bueschel, & Garabedian, 2006).

Gap in Literature

Studies in education leadership competencies and preparation rarely involve discussion of
applied evaluation methods. Absent from existing literature is a clear picture about evaluation
training as a skill to assist leaders in making or supporting decisions (Shadish, 1994). No
information was found in the review of literature about the prevalence of or the frequency
with which students completed single courses of evaluation within preparation programs for
education leaders. Empirical studies that systematically examine curricula within school
leadership programs have most often focused on master's degrees, the customary degree for district
and site administrators. The existing literature describes an environment within education that
promotes training for evidence-based decision making to inform classroom, school, and
system level practices accelerated by accountability policies under No Child Left Behind and
Race to the Top legislation. Administrator credentialing and university program accreditation
standards hint at, but do not specify, evaluation competencies as required outcomes of
preparation programs, despite the predominance of graduate programs in evaluation within
colleges of education and occasionally within departments that also train education leaders.

THE STUDY

The study was based on the premise that doctoral education leader preparation programs
teach practitioners to use data to ensure effective education programs, problem-solve at
school and systems levels, and understand the impact of policy on practice. Evaluation
training provides competencies and knowledge useful to school leaders, so it seemed likely
that at least some programs would include single courses in evaluation methods in their
programs of study. Just as other research courses present skill sets and reinforce scientific
habits of mind, evaluation courses equip graduates to act as informed consumers; provide
strategies for assessing outcomes, impacts, or costs of programs; and ensure some level of
proficiency in using data for decision making.

Four principal research questions guided this inquiry:

1. To what extent are single courses in evaluation prevalent in education leader
doctoral programs?

2. How likely are students to take an evaluation course in a doctoral program?
3. What course content is evident from course titles?
4. What course content is evident from catalog descriptions?

Sample Selection

Institutions were initially identified through the Carnegie Foundation for the Advancement of
Teaching as high or very high research doctoral-granting institutions. They were cross-
referenced against Educational Administration and Supervision program listings in the
Integrated Postsecondary Education Data System (IPEDS) College Navigator search engine
of the National Center for Education Statistics (NCES). Six categories of programs related to
education leadership within the College Navigator drop-down menu were reviewed for
inclusion in the study. Category headings included administration; elementary, middle, and
secondary principalship; superintendency and systems administration; and urban education
leadership. To verify program status, further university, college, department, and program
website searches provided specific program titles, verification of active status, doctoral
degree type (PhD or EdD), and additional confirmation of PK-12 administrator focus. In
cases where website information was inconclusive, phone calls were made directly to
departments or registrars to ensure a focus on PK-12 administration at the doctoral level.

The search resulted in a total of 132 programs, located in 43 states and in the District
of Columbia. Programs were defined as a series of courses, seminars, practicum, and other
requirements that prepared a student for school administration positions in PK-12 settings and
led to a terminal degree. Although not restricted to practitioner preparation, programs needed
to include a PK-12 administrator track. Of these, 67 offered Doctorates in Education (EdD),
39 granted Doctors of Philosophy (PhD), 26 offered both EdD and PhD options, and one
institution, Harvard University, offered both an EdD and an Educational Doctorate in
Leadership (EDLD). Programs excluded were those solely geared to higher education; those
not currently accepting applicants or discontinued; distance or online programs managed by

5

continuing education; programs in leadership outside the field of education (organizational
leadership, non-profit management); or, programs or concentrations tangential to school
leadership (including measurement and statistics, curriculum and instruction, educational
technology, elementary and secondary education teaching, and subject-specific teaching).

While Educational Leadership was the first and Educational Administration the
second most prevalent title, other common program labels included Administration of
Elementary and Secondary Education, Administration and Policy Analysis, Administration
and Supervision, and Educational Foundations and Leadership. While it is feasible that
graduates from other programs take administrative roles, the central purpose of those
programs was not administrator preparation. Rather, the focus of the study was on programs
described as preparing students for practitioner positions at school, district, state, or federal
levels.

Data Management, Collection, and Analysis

To identify existing evaluation courses, institution, department, and doctoral program
websites were searched and online documents mined to examine programs of study. Three
major data sets were delineated from online searches and review of programs of study. All
132 programs were reviewed for evidence of single courses in evaluation, based on the
presence of the word evaluation in the title. Courses solely oriented to personnel or teacher
evaluation procedures or policies were excluded as outside the parameters of this study.
Lists of courses, programs of study, and department websites also were scrutinized for the
extent to which the course was required, elective, or optional. Microsoft Word and Excel, and
Wordle word cloud technologies were used to aid data management and analyses (McNaught
& Lam, 2010). For this study, the procedures served as worthwhile strategies to recognize
patterns of evaluation training in education leader doctoral programs.

Because a major goal of the study was to identify the relative importance of evaluation
courses within the doctoral programs, frequency analysis was completed in all cases possible,
under the assumption that a course considered valuable would more often be found in
programs of study. Additionally, content analysis included development of categories of
courses, which were also counted to establish the relative presence of certain content over
others. Once course titles and catalog course descriptions were transferred into an Excel
spreadsheet, they underwent content analysis using multiple-cycle coding. For both titles and
descriptions, first-cycle initial and theme coding was used to categorize entries. For catalog
descriptions, second-cycle focused coding was used to distinguish major content elements
(Saldana, 2010).
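The study's coding was done manually in Word and Excel. As a rough illustration only, the
keyword-based pass below sketches how category counts like those in Table 1 could be
tallied programmatically; the sample titles and keyword rules are hypothetical and are not the
study's actual coding instrument.

```python
from collections import Counter

# Hypothetical course titles; the study drew 62 real titles from program websites.
titles = [
    "Educational Program Evaluation",
    "Program Planning and Evaluation",
    "Evaluation in Educational Administration",
    "Evaluation Research & Measurement",
    "Evaluation Methods",
]

def categorize(title: str) -> str:
    """Assign a title to one theme using simple keyword rules (illustrative only)."""
    t = title.lower()
    if "program" in t:
        return "Program Evaluation"
    if "administration" in t or "leader" in t or "decision" in t:
        return "Administration"
    return "Research"  # methods, models, measurement, and similar titles

# Tally themes and report counts with percentages, as in a frequency analysis.
counts = Counter(categorize(t) for t in titles)
total = sum(counts.values())
for theme, n in counts.most_common():
    print(f"{theme}: {n} ({100 * n / total:.1f}%)")
```

A real replication would still require human coders, since titles such as "Systematic
Evaluation" carry no obvious keyword and must be categorized by judgment.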

Study Limitations

Websites were the primary data source, although follow-up phone and/or email verifications
were made in a few cases to obtain course catalog descriptions when not navigable online.
Online data searches have increasingly become an alternative method to records and
document data collection (LaVelle & Donaldson, 2010). Advantages of web-based research
include ease of collection, availability of descriptive and frequency data, access to contextual
information, and cost-efficiency (Russ-Eft & Preskill, 2001). Disadvantages include time
constraints, inaccessibility of some data, incomplete representation of programs, and sample
limitations dependent on individual program web presence. Another disadvantage particular
to this study was the fluctuating nature of website redesign and updates. These data must be
considered particularly time-sensitive: they were collected in early spring 2010, and website
updates and changes in program curricula will have occurred since the study, limiting
replicability and current accuracy. However, point-in-time research provides a useful first
step toward identifying the prevalence of stand-alone courses and the perceived importance
of evaluation as a competency in doctoral education leadership programs.

FINDINGS

Finding 1: Prevalence of single evaluation courses

Of the 132 leadership programs investigated, fewer than half, 54 (40.9%), had evaluation
methods courses. Of these, 49 (90.7%) programs had just one evaluation methods course,
while five programs (9.2%) listed two separate courses and one program (1.8%) had a
three-course sequence entitled educational research and evaluation.

Finding 2: Likeliness of students to take evaluation courses

In the 54 programs, a total of 62 evaluation courses were identified, and program websites
were investigated to see whether the courses were required. The smallest group, 16 (25.8%),
were required courses in the program of study. Another 18 (29%) appeared to be electives
among a short list of possible selections, with at least a 50% chance of being selected. The
largest number, 28 (45.2%), appeared to be optional, one among many possible program
elective courses. Required courses were usually found in online brochures or programs of
study, within a list of courses to be taken. Elective evaluation courses were on a list of
electives that indicated a reasonable chance of selection, for example one of six elective
courses of which three must be taken. More commonly, evaluation courses were listed among
a long list of electives or other class options statistically far less likely to be selected, for
example a list of six electives in programs in which students might normally select only one
or two courses or substitute other options.

Finding 3: Course content and titles

In general, course titles specified education, administrative functions, or research. The
evaluation courses most often specifically referred to education, schools, learning
organizations, or curriculum in 37 (59.7%) of the cases. Besides a focus on education, the
titles were organized into three categories as shown in Table 1. Of the 62 courses, the largest
group of 25 (40.3%) titles included the words program evaluation, with headings such as
Curriculum and Program Evaluation, Educational Organizations and Programs, and
Implementation and Evaluation of Programs. A group of 21 (33.9%) courses were
categorized as administration courses with varied titles, such as Administrative Decision
Making; Planning, Research, Evaluation for School Leaders; and Evaluation of Educational
Products and Systems. A smaller third group of 16 (25.8%) titles were research courses, with
titles such as Evaluation Research and Measurement, Evaluation Models and Techniques, or
Methods of Evaluation.


Those titles that mentioned research, methods, models, or design were generally less clear
about the type of research, although quantitative research and statistics were suggested for
the five (8.1%) courses with the words measure or measurement in the title.

Table 1

Content Analysis of Evaluation Course Titles

Theme               n    %     Example Course Titles
Program Evaluation  25   40.3  Educational Program Evaluation;
                               Program Planning and Evaluation
Administration      21   33.9  Evaluation in Educational Administration;
                               Systematic Evaluation
Research            16   25.8  Evaluation Research & Measurement;
                               Evaluation Methods

(n = 62)
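The percentages reported in Table 1 follow directly from the theme tallies over the 62 identified course titles. As a minimal sketch of that frequency computation (Python is used purely for illustration; the counts are taken from Table 1 and are not recomputed from raw data):

```python
from collections import Counter

# Theme counts taken from Table 1 (n = 62 evaluation course titles)
title_themes = Counter({
    "Program Evaluation": 25,
    "Administration": 21,
    "Research": 16,
})

total = sum(title_themes.values())  # 62 courses in all

for theme, count in title_themes.most_common():
    # One-decimal percentages, matching the table (e.g., 25/62 -> 40.3%)
    print(f"{theme}: {count} ({count / total * 100:.1f}%)")
```

Running this reproduces the 40.3%, 33.9%, and 25.8% figures shown in the table.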

Finding 4: Course content and catalog descriptions

Of the 62 identified evaluation courses, catalog descriptions for 49 courses were located,
analyzed, and coded into the following themes: (a) survey for familiarization with approaches
and uses, (b) designs to plan and conduct evaluations, (c) evaluation as research and inquiry,
(d) evaluation for leading and improving educational institutions or programs, (e) specialized
approaches in evaluation, and (f) original student evaluative projects. As shown in Table 2,
the largest group, 17 (34.7%), focused on broad familiarity with evaluation models and uses in
education. The second largest category, 10 (20.4%), provided instruction on planning and
conducting evaluations. Another 6 (12.2%) courses were oriented to using data to make
decisions in educational institutions, and 8 (16.3%) mentioned research and inquiry and both
quantitative and qualitative methods in evaluation. A small group of 5 (10.2%) courses was
labeled specialized approaches. Two Louisiana institutions mentioned state certification
guidelines focused on the standards adopted by national professional evaluation
organizations; another two referenced organizational behavior and environmental and
political influences; and one institution’s course focused on international education policy,
monitoring, and evaluation. The smallest category, 3 (6.1%) of courses, specified independent
student projects or completion of an original program evaluation.

Among the 49 course descriptions analyzed, there were a few mentions of specific
evaluation strategies, including systems theory, needs assessments, performance
measurements, and school self-studies. In no case did a course description focus entirely on
politics and policies, although these were mentioned in two cases. In only one instance were
varied school stakeholders and local communities indicated. In several cases, experimental,
quasi-experimental, and statistical analyses were targeted. Only once was the term
accountability found in the course descriptions. Interestingly, there appeared to be no link
between practical application courses and EdD programs or between more theoretical content
and PhD programs.


Table 2

Content Analysis of Evaluation Catalog Course Descriptions

Theme              Example Descriptions from Catalog                          Frequency
Survey, Overview   “reviews theories,…designs, analysis, current trends”;     17 (34.7%)
                   “introduction to concepts, approaches, techniques”;
                   “history, state of the art, frameworks”
Plan, Conduct      “knowledge and skills to plan and conduct”;                10 (20.4%)
                   “emphasis on needs assessment, school self-study”;
                   “charting course, assessing progress…desired outcomes”
Research, Inquiry  “emphasis on quantitative methodology”;                     8 (16.3%)
                   “naturalistic and empirical methods and procedures”;
                   “action research…empirically evaluating”
Lead, Improve      “use data for decision-making purposes”;                    6 (12.2%)
                   “enable administrator to develop, implement, evaluate”;
                   “application to educational progress”
Specialized        “affect organizational behavior, ethical considerations”;   5 (10.2%)
                   “meet standards of National Joint Committee”;
                   “environmental practical factors influencing design”
Student Projects   “student develops and carries to completion…study”;         3 (6.1%)
                   “design, conduct, report a real program evaluation”;
                   “prepare and present designs or program evaluation”

(n = 49)

DISCUSSION

This exploratory study was designed in response to the growing interest in reforming
preparation programs to better prepare education leaders. In this study, classic content and
frequency analyses were combined with web-based searches to investigate the actual level of
importance of evaluation courses within leading doctoral programs nationwide. Programs of
study, course titles, and catalog descriptions all provided meaningful information for
beginning to understand whether the professionalization of evaluation, growth in
accountability requirements, and interest in the real-world applicability of leadership
preparation were reflected in the extent of evaluation training in leadership doctoral programs.
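The catalog-screening step used in this methodology can be pictured as a simple keyword scan of course listings. The sketch below is purely illustrative (the sample course titles and the `is_evaluation_course` helper are hypothetical, not data or code from the study):

```python
# Hypothetical sketch of the screening step: flag catalog entries whose
# titles contain the term "evaluation", as the study's web-based search did.
SAMPLE_CATALOG = [
    "Curriculum and Program Evaluation",
    "Instructional Supervision",
    "Evaluation Research & Measurement",
    "School Finance",
]

def is_evaluation_course(title: str) -> bool:
    # Case-insensitive match on the single term of interest
    return "evaluation" in title.lower()

flagged = [title for title in SAMPLE_CATALOG if is_evaluation_course(title)]
print(flagged)
# -> ['Curriculum and Program Evaluation', 'Evaluation Research & Measurement']
```

In practice the study supplemented such term matching with human reading of titles and catalog descriptions, since a bare keyword match cannot distinguish, say, personnel evaluation from program evaluation.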

The discovery of single courses in about half of the doctoral programs investigated
suggests that, overall, evaluation is not a core focus. That half of the programs offered
evaluation, however, indicates that evaluation was considered viable and valuable in
education leadership training. While the perceived importance of evaluation methods in
specific programs remains unclear, patterns emerging from the data suggest that evaluation
is rather consistently taught among doctoral programs in education leadership, although not
universally required.

In education, it is difficult to define evaluation, whether the actual processes are
assessing student outcomes, appraising teacher performance, analyzing instruction or
curriculum, distinguishing program outcomes, or reviewing school milieu (e.g., climate
surveys). Evaluation is a term that may reflect any or all of these activities and more
(Schwandt, 2009). Therefore, it is not surprising that about half of the education leadership
doctoral programs investigated had courses including the term evaluation. Closer examination
of both course titles and catalog descriptions supports the somewhat amorphous application of
evaluation methods in education. Titles seem nearly evenly divided among evaluation of
programs, administrative uses, and research perspectives. Catalog descriptions leaned towards
general survey courses with a broad smattering of other learning objectives, most with clear
educational foci. In this sample of education leader preparation programs, evaluation courses
are about linking inquiry to feedback on school and classroom functions and outcomes.

Results from this study are pertinent to research on education leader preparation and
reform. The finding that many programs offered at least one evaluation course, usually
related to issues in education and sometimes to decision making and research, indicates that
evaluative competencies found in other professional and clinical fields may also be relevant
in education (King, Stevahn, Ghere, & Minnema, 2001). Future studies may delve more
deeply into syllabi, texts, and class assignments to better understand course objectives and
whether evaluation is situated more around collecting information, following existing
accountability requirements, or making administrative decisions. Of key interest would be
whether courses are similar to those investigated in the 1980s, aimed more at the informed
consumer than the skilled user, or whether they reflect current accountability and
administrative realities. Also, this study did not link evaluation courses with other research
and methods courses or with the entirety of programs of study, which might explain more
clearly the place of evaluation content within preparation programs.

Other implications for future research include a need to sample programs through
varied lenses. Given the current criticism that preparation is not meeting the needs of on-the-
job practitioners, the intense national climate of accountability, and the perception that school
leaders play a vital role in refurbishing American education, evaluation appears to offer tools
useful to future education leaders; yet the convention that evaluation is taught as a single
course, or a series of single courses, somewhat adjacent to the main curriculum appears to
hold true. Future studies of newly redesigned programs could serve to better illuminate actual
content and learning objectives.

It is not only leading research universities and institutions involved in foundation-supported
programs that are revamping programs of study for education leaders; many public, private,
and for-profit institutions increasingly focus degree and certification programs on professional
skills and clinical experiences. The bulk of practitioner training takes place at regional and
other institutions, many of which have practitioner programs focused on real-world
application more than traditional academic structure. A look at regional universities, private
institutions, and other programs that train practitioners may reveal different results from those
at leading academic institutions.


Proximity to strong evaluation and psychometric concentrations might provide
interesting comparative case studies of whether evaluation competencies are more keenly
integrated in institutions or colleges of education with a psychometric training history. These
more selective studies may reveal the underlying beliefs, context, and content of evaluation
courses and provide information about what institutional or programmatic characteristics and
influences lead to the inclusion of evaluation in doctoral programs.

Accountability overshadows current discussion of educational evaluation. We are in an
era when educational policies require school leaders to collect and analyze information to be
used to monitor and improve educational programs. There is a public call and increased
professional scrutiny for graduate programs in education leadership to produce candidates
with these evaluative competencies. Understanding how leading doctoral programs interpret
the need for evaluation methods courses offers a valuable piece in understanding what skills
are taught and considered important as programs evolve and produce future school leaders.

REFERENCES

Alkin, M. C., & Christie, C. A. (2002). The use of role play in teaching evaluation. American
Journal of Evaluation, 23, 209-218.

American Evaluation Association. (2012). Guiding principles for evaluators. Retrieved from
http://www.eval.org/Publications/GuidingPrinciples.asp

Baker, B., Orr, M. T., & Young, M. D. (2007). Academic drift, institutional production and
professional distribution of graduate degrees in educational leadership. Educational
Administration Quarterly, 43(3), 279-318.

Berry, J., & Beach, R. (2009, July 9). K-12 leadership and the educational administration
curriculum: A theory of preparation. Retrieved from http://cnx.org/content/m13772/1.2/

Bustamante, R. M., & Combs, J. P. (2011). Research courses in education leadership programs:
Relevance in an era of accountability. International Journal of Education Policy and
Leadership, 6(3). Retrieved from http://www.ijepl.org

Christie, C. A., & Klein, B. M. (2009). Developing a community of practice: Learning and
transformation through evaluation. In K. E. Ryan & J. B. Cousins (Eds.), The Sage
International Handbook of Educational Evaluation (pp. 357-373). Thousand Oaks,
CA: Sage.

Conner, R. F. (1986). The teaching of evaluation in interdisciplinary programs: UC Irvine.
New Directions for Program Evaluation, 29, 61-69.

Darling-Hammond, L., LaPointe, M., Meyerson, D., & Orr, M. (2007, April). Preparing
school leaders for a changing world: Lessons from exemplary leadership development
programs. Palo Alto, CA: Stanford Educational Leadership Institute.

Davis, B. G. (1986). Overview of the teaching of evaluation across the disciplines. New
Directions for Program Evaluation, 29, 5-14.

Donaldson, G. A., (2001). Cultivating leadership in schools: Connecting people, purpose, and
practice. New York: Teachers College Press.

Donaldson, S.I. (2007). Program theory-driven evaluation science: Strategies and
applications. New York: Lawrence Erlbaum Associates.

Fierro, L. A. (n.d.). The status of evaluation training in schools and programs of public
health. Unpublished report, Claremont Graduate University.


Fullan, M. (2007). The new meaning of educational change (4th ed.). New York: Teachers
College Press.

Golde, C. M. (2007). Signature pedagogies in doctoral education: Are they adaptable for the
preparation of education researchers? Educational Researcher, 36, 344-351.

Greene, J. C. (1994). Qualitative program evaluation: Practice and promise. In N. K. Denzin &
Y. S. Lincoln (Eds.), The handbook of qualitative research (pp. 530-544). Thousand
Oaks, CA: Sage.

Heberger, A. E., Christie, C. A., & Alkin, M. C. (2010, March). A bibliometric analysis of the
academic influences of and on evaluation theorists’ published works. American
Journal of Evaluation, 31(1), 24-44. doi: 10.1177/1098214009354120

Huck, S. W. (2008). Reading statistics and research (5th ed.). Boston, MA: Pearson.

Interstate School Leaders Licensure Consortium. (2008). Educational leadership policy
standards: ISLLC 2008. Washington, DC: Council of Chief State School Officers.
Retrieved from
http://www.ccsso.org/projects/education_leadership_initiatives/ISLLC_Standards/

Jean-Marie, G., & Normore, A. H. (Eds.). (2010). Educational leadership preparation:
Innovation and interdisciplinary approaches to the Ed.D. and graduate education.
New York, NY: Palgrave Macmillan.

King, J. A., Stevahn, L., Ghere, G., & Minnema, J. (2001). Toward a taxonomy of essential
evaluator competencies. American Journal of Evaluation, 22(2), 229-247.

Kronenfeld, J. J. (1981). Having students conduct evaluations: Merging field experience and
academic learning. Evaluation News, 2, 375-378.

Lauer, P. A. (2006). What principals need to know about education research. Principal, 85(5),
12-17.

LaVelle, J. M., & Donaldson, S. I. (2010, March). University-based evaluation training
programs in the United States 1980-2008: An empirical examination. American
Journal of Evaluation, 31(1), 9-23.

Levine, A. (2005). Educating school leaders. Washington, DC: The Education Schools Project.

McNaught, C., & Lam, P. (2010). Using Wordle as a supplementary research tool. The
Qualitative Report, 15(3), 630-643. Retrieved from
http://www.nova.edu/sss/QR/QR15-3/mcnaught.pdf

Mertens, D. M. (2008). Transformative research and evaluation. New York, NY: The
Guilford Press.

Milstein, M., & Associates. (1993). Changing the way we prepare educational leaders: The
Danforth experience. Newbury Park, CA: Corwin.

Morris, M. (1994). The role of single evaluation courses in evaluation training. New
Directions for Program Evaluation, 62, 51-59.

Murphy, J. (2003, September). Reculturing educational leadership: The ISLLC standards ten
years out. Fairfax, VA: National Policy Board for Educational Administration.
Retrieved from http://www.npbea.org/Resources/ILLC_10_years_9-10.pdf

Murphy, J., Young, M. D., Crow, G. M., & Ogawa, R. T. (2009). Introduction: Exploring the
broad terrain of leadership preparation in education. In M. D. Young, G. M. Crow, J.
Murphy, & R. T. Ogawa (Eds.), Handbook of research on the education of school
leaders (pp. 1-22). New York, NY: Routledge.


Murphy, J., & Vriesenga, M. (2004). Research on preparation programs in educational
administration: An analysis. Columbia, MO: University Council for Educational
Administration.

Orr, M. T., Cordiero, P., Thome, R., & Martinez, R. L. (2010). Wallace Workshop II:
Developing purposeful and coherent leadership preparation curriculum. Presentation
at the University Council for Educational Administration 2010 Convention, October
31, 2010, New Orleans, LA.

Pont, B., Nusche, D., & Moorman, H. (2008). Improving school leadership, Volume I: Policy
and practice. Paris, France: Organization for Economic Cooperation and Development
(OECD).

Rossi, P. H., Lipsey, M. W., & Freeman, H. E. (2004). Evaluation: A systematic approach (7th
ed.). Thousand Oaks, CA: Sage.

Rudalevige, A. (2009, Winter). Juggling act: The politics of education science. Education Next,
9(1). Retrieved from http://educationnext.org/juggling-act/

Russ-Eft, D., & Preskill, H. (2001). Evaluation in organizations: A systematic approach to
enhancing learning, performance, and change. New York, NY: Basic Books.

Ryan, K. E., & Cousins, J. B. (2009). Introduction. In K. E. Ryan & J. B. Cousins (Eds.), The
Sage International Handbook of Educational Evaluation (pp. ix-xvii). Thousand Oaks,
CA: Sage.

Ryan, K. E., & Feller, I. (2009). Evaluation, accountability, and performance measurement in
national education systems: Trends, methods, and issues. In K. E. Ryan & J. B.
Cousins (Eds.), The Sage International Handbook of Educational Evaluation (pp. 171-
189). Thousand Oaks, CA: Sage.

Saldana, J. (2009). The coding manual for qualitative researchers. Thousand Oaks, CA: Sage.

Sanders, J. R. (1986). The teaching of evaluation in education. New Directions for Program
Evaluation, 29, 15-27.

Schwandt, T. A. (2009). Globalizing influences on the western evaluation imagery. In K. E.
Ryan & J. B. Cousins (Eds.), The Sage International Handbook of Educational
Evaluation (pp. 19-36). Thousand Oaks, CA: Sage.

Scriven, M. (1967). The methodology of evaluation. In R. W. Tyler, R. M. Gagne, & M.
Scriven (Eds.), Perspectives of curriculum evaluation (pp. 39-82). Chicago, IL: Rand
McNally.

Shadish, W. R. (1994). Need-based evaluation theory: What do you need to know to do good
evaluation? American Journal of Evaluation, 15(3), 347-358.

Shadish, W. R. (1998). Evaluation theory is who we are. American Journal of Evaluation,
19(1), 1-19.

Shadish, W. R., Cook, T. D., & Leviton, L. C. (1991). Foundations of program evaluation:
Theories of practice. Newbury Park, CA: Sage.

Shepperson, T., & Fierro, L. A. (2010, June). Teaching evaluation: Cross-disciplinary research
on graduate training in public health and educational administration. Paper presented
at the Hawaii International Conference on the Social Sciences, Honolulu, HI, June
2-5, 2010.

Shoho, A. R. (2010, October). Presidential address. Presentation at the University Council for
Educational Administration convention, New Orleans, LA, October 29, 2010.

Shulman, L. S., Golde, C. M., Bueschel, A. C., & Garabedian, K. J. (2006). Reclaiming
education’s doctorates: A critique and proposal. Educational Researcher, 35(3), 25-32.


Stake, R. E. (2010). Qualitative research: Studying how things work. New York, NY: The
Guilford Press.

Stevahn, L., King, J. A., Ghere, G., & Minnema, J. (2005). Establishing essential
competencies for program evaluators. American Journal of Evaluation, 26, 43-59.

Stufflebeam, D. L., Foley, W. J., Gephart, W. J., Guba, E. G., Hammond, R. L., Merriman,
H. O., & Provus, M. M. (1971). Educational evaluation and decision making.
Bloomington, IN: Phi Delta Kappa.

Trevisan, M. S. (2004). Practical training in evaluation: A review of the literature. American
Journal of Evaluation, 25, 255-272.

Wahlstrom, K. L., Louis, K. S., Leithwood, K., & Anderson, S. E. (2010). Investigating the
links to improved student learning: Executive summary of research findings. Learning
From Leadership Project. New York, NY: The Wallace Foundation.

Wright, G., & Gray, N. D. (2007, October). The ISLLC standards: A unifying force in school
administrator and counselor preparation. Paper presented at the Association for
Counselor Education and Supervision Conference, Columbus, OH. Retrieved from
http://74.125.93.132/search?q=cache:jzpkXCDr7v0J:counselingoutfitters.com/vistas/v
istas09/WrightGray.doc+wright+gene+isllc+and+counseling&cd=1&hl=en&ct=clnk&
gl=us


Answering the Sustainability Question: A 3 Year Follow-up
Report on a Wallace Foundation Training Grant Program and

What Did Participants See as Important?

Michael A. Stearns
Lisabeth S. Margulus
Grand Valley State University

In July 2008, Riverbend Public Schools (RPS) in Western Michigan and Grand Valley State University (GVSU)
received a grant from the Wallace Foundation that supported the development of a specialized educational
leadership program. The project was designed to customize an existing degree program in the university’s
College of Education, focusing on leadership skill sets for urban school leaders. The project, titled the “Aspiring
Leaders Program,” allowed 34 urban teachers and new principals to obtain a master’s degree in educational
leadership or an educational specialist degree in educational leadership with special expertise in urban schools.
The program ran from November 2008 through the fall semester of 2009. In the late fall of 2009 and again in
2012, follow-up studies were conducted to determine whether this customized program had benefited the
participants and whether they continued to use the skill sets they had been taught. This article describes the 2012
three-year follow-up study and discusses its results.

BACKGROUND

The funding source for the original project was the Wallace Foundation. Based in New York
City, the Wallace Foundation is a national philanthropy that seeks to improve education and
enrichment for disadvantaged children. The foundation has an unusual approach: funding
projects to test innovative ideas for solving important social problems, conducting research to
find out what works and what doesn’t, filling in key knowledge gaps, and then
communicating the results to help others (Wallace Foundation, 2013). Hence, the foundation
has a major interest in urban education initiatives.

RPS is the largest urban school district in West Michigan. Typically, 85.2% of its
students qualify for free and reduced lunch status. Minority students make up 86% of its
student body. The administration of RPS was an enthusiastic partner in this project. Their goal
was to have a pool of highly trained principals they could place in leadership positions
throughout their school district. GVSU has had a long history of partnering with RPS on a
wide variety of training projects, and the university’s commitment to urban education made it
a logical choice for this project. Thus, the stars aligned and the Wallace Foundation, GVSU,
and RPS formed a grant-funded partnership to train a cohort of specially chosen urban
educators in educational leadership.

NCPEA International Journal of Educational Leadership Preparation, Vol. 8, No. 1– March 2013
ISSN: 2155-9635 © 2013 National Council of Professors of Educational Administration


While some Wallace Foundation grant-funded training programs across the country
have relied heavily on existing university coursework, GVSU took a different approach.
As Angelle, Wilson, and Mink (2011) have noted, “With heightened emphasis on school
leadership and the call for greater accountability, leadership preparation programs must
evolve to meet the needs of today’s principals” (p. 39). GVSU acknowledged this challenge to
meet the needs of today’s leaders, especially urban principals, and took a bold step in
customizing a degree program for them.

The Program

As GVSU officials and RPS administrators began planning the program in the late summer
and early fall of 2008, they felt it was important to select university professors who had
administrative experience in large urban K-12 settings. It was also recommended that a team
of three professors collaborate to develop classroom experiences that integrated theory and
practice. The three professors chosen had significant urban leadership backgrounds. One was
an experienced director of special education, one had served as a high school principal, and
the third had been a principal in both elementary and middle schools. All three professors had
doctoral degrees with a strong emphasis on integrating theory and practice into authentic
work, and all were thoroughly trained in systems thinking and change processes.

The 34 program participants were selected in fall 2008 by RPS officials based on their
demonstrated leadership ability and the likelihood that they would be chosen as a principal
within the school district. While they were all urban educators, the 34 individuals had
different backgrounds and held a wide variety of positions throughout the district. The cohort
included four principals, five assistant principals, a dean of students, a math coach, a language
coach, a curriculum coordinator, a school reform specialist, a public safety officer, a youth
advocate, a physical therapist, and 17 classroom teachers. Sixteen of the 34 participants were
female, and 18 were male. Their experience in education ranged from three to 27 years.
Nearly all had spent the majority of their professional educational careers in the RPS school
district. As a cohort, some of the participants knew other members of the group, but most had
only a passing knowledge of the other members’ professional assignments or personal lives. It
was obvious to the team of professors that camaraderie needed to be developed for risk-free
sharing of experiences to occur.

A customized degree program and accompanying planned program were developed for
each participant in the Aspiring Leaders Program. This process began with individual
interviews. Participants were asked questions such as, “What do you feel will be your biggest
challenges as an urban principal new to the position?” The program development team then
took each participant’s input, correlated his/her needs with state standards and school district
goals, and developed a degree program that contained the skill sets projected to be needed for
career success (see page 9 for program overview and literature review).

As part of the customization process, the GVSU professors also created a classroom
delivery system that integrated research with authentic applications. A typical class session
would include a review of the literature with a case study. Students were asked several open-
ended questions about the problem presented in the case study and then worked on solutions
in groups. This was followed by discussion, typically in Socratic style, with a lively
interchange about solutions to the problem. Below is a sample planning matrix that the
professors used as a road map to providing the information that the participants saw as
important. The matrix also provides direction on planning, classroom activities, and
assessment.

ASPIRING LEADERS INSTRUCTIONAL PLANNING
Prepared by Dr. Michael Stearns, 3/4/09

Focus Area/Standard: Standard 1.4b, Vision Stewardship
What do we want candidates to know: Data collection methods; use and understanding of
student performance data; using data to assess progress toward the district’s mission
Resources: Sample ACT; chart paper; GRPS mission statement; The Data Wise Improvement
Process (article from The Harvard Newsletter)
Activities: Discuss expectations for students; take sample ACT; process results; discuss how
to use testing data to plan for instructional support
Time: 2 hrs.
Assessments: Group plans; candidate reflection paper
Ed. Spec. Research Competencies: Research data trends of a targeted school and design a
long-term intervention plan (refer to Effort Based Ed.)

Focus Area/Standard: Standard 2.4a, Designing comprehensive growth plans
What do we want candidates to know: Knowledge of adult learning strategies; use of
authentic problems and tasks to generate new problem-solving skills
Resources: List of “high priority” issues currently faced by principals in GRPS; NCREL;
Balanced Leadership (Marzano et al.)
Activities: Article on adult learning (jigsaw learning); application of problem-solving
techniques to current challenges
Time: 2 hrs.
Assessments: Individual growth plan highlighting PD needs
Ed. Spec. Research Competencies: Explore the issues of adults as learners with a focus on
generational issues; 10-page paper required

Classes began during winter semester 2009 and continued through the spring/summer
and fall semesters that calendar year.

Follow-up Studies

In late fall 2009, when most of the participants had completed the majority of their core
courses, they were interviewed as part of a formal research project analyzing the program.


Then a three-year follow-up survey was developed in 2012 and sent to all of the participants
from the original Aspiring Leaders Program who were still employed by the school district.

To understand the importance of this 2012 follow-up study, one must consider the
context in which the original cohort of participants worked. They were teachers or newly
appointed administrators in an urban system. As urban educators, they faced many challenges.
According to Voltz (1998), the challenges that have the greatest impact on the education of
urban youth include poverty, violence, home-school communication, teacher preparedness,
cultural incongruence (e.g., predominantly white faculty teaching predominantly students of
color), relevant curricula, and diversity awareness. The original Aspiring Leaders Program
sought to give the participants skill sets that would equip them to deal with these challenges.

The initial interviews in 2008 with the participants had indicated their concerns. These
included, in order of perceived importance:

1. Implementing a vision
2. Conducting teacher evaluations
3. Conducting productive meetings
4. Understanding the dynamics of change
5. Resolving conflicts
6. Understanding generational differences in staff
7. Dealing with diversity issues (e.g., race, age)

Because the participants were degree-seeking students, the professors had to account for
official state standards as they customized the participants’ individual courses of study. With
all of these competing requirements and perceptions, the first follow-up study in fall 2009
sought to determine whether the skill sets taught actually addressed the participants’ primary
concerns. The 2009 study also attempted to determine if they were beginning to use the
administrative skill sets they had learned. The responses from the participants in the Aspiring
Leaders Program were overwhelmingly positive; the participants felt the program did indeed
prepare them for roles as principals. Determining whether this positive response continued,
and whether participants were still using the skills taught in the program in their current
positions, answers the question of sustainability for this training program. Thus, it was clear
to the professors/researchers that the 2012 follow-up study must focus on the same areas to
make a comparison possible and determine sustainability.

Purpose

The goals of the 2012 follow-up study were to: (a) determine which skill sets were most
useful to the cohort members, (b) determine if the skill sets mastered in the 2008-2009
program were still being used by the participants in their current professional roles, and (c)
provide examples of leadership strategies that could be utilized by current school leaders and
professors of educational leadership who are training future urban school leaders.

Theoretical Framework

The professors relied on three theoretical frameworks to ground their work with the Aspiring
Leaders Program: (1) change theory, (2) leadership theory, and (3) adult learning theory. The
work of Wagner (2006), Senge (1990), and Fullan (2001) was reviewed and shared with the
participants. Participants designed a “preferred future” and assessed the current reality in a
selected school. They discussed the issues of “mental models,” “team learning,” and “moral
purpose.” One of their final assignments was to create their own theory of change. In
preparing the participants for leadership roles, the professors discussed the components of
leadership theory (Northouse, 2007), including transformational leadership and authentic
leadership. The professors agreed they needed a deep understanding of adults as learners to
be highly successful with the cohort group. Knowles, Holton, and Swanson (2005) note that
“adults are interested in their learning when they perceive that it will increase their ability to
deal with problems that they face in their work situations.” Authentic situations and
problem-solving activities became the model of classroom work as the participants came to
believe in their ability to solve problems.

METHODOLOGY

Survey and Respondents

To determine if the skill sets contained in the original customized training program were
useful and still being used three years later, a qualitative survey was designed to gather input

from the program participants. The survey was sent via mail or e-mail to 24 of the 34 original
participants; 10 of the 34 had left the school district.

The survey included the following 10 open-ended questions:

1) Discuss how your knowledge of the dynamics of change has benefited you in your role
as principal.
2) Describe the specific coaching and evaluation tools you have used with your staff.
3) How have you used your previous experience in developing a vision speech in your
current work assignment?
4) What are the specific coaching and evaluation tools that you have used with your staff,
and how have they been helpful in improving teachers’ classroom delivery?
5) What generational issues have you encountered?
6) Describe how you begin your staff meetings to ensure success.
7) How have you relied on the conflict resolution skills taught in the Aspiring Leaders
Program?
8) What skills and knowledge that you learned have contributed to your ability to do your
current assignment?
9) How and in what ways did the ongoing availability of the professors provide you with
additional supports and skill development beyond the timeframe of the program?
10) What learning experiences did you count as most valuable from the Aspiring Leaders
Program that prepared you to be an urban school leader?

In essence, the survey was trying to ascertain the answers to some basic program questions.
What aspects of the coursework and informational presentations were still being used by the
participants as they moved into their professional leadership roles? Which of the skill sets, if
any, had become integrated into their everyday leadership practices?

Responses were received from 12 of the surveyed participants (35% of the original cohort of
34; 50% of the 24 surveyed). Three of the respondents were male, and nine were female. One
respondent was a fourth-grade teacher with additional responsibilities as a building leader.
Nine were principals, and two were assistant principals. Ten of the respondents had finished
their degree; two were still in process. The respondents’ demographics represented a fair
sample of the original cohort. This made a comparison of the responses from the 2009 study
and the follow-up (2012) study possible.

The researchers believe that the exceptionally positive rate of survey return (35%) was
directly linked to ongoing support offered during the previous three years. The GVSU
professors who taught in the program have maintained working relationships with the original
program participants. Thus, they were able to encourage the participants to respond to the
survey.

Analysis

A qualitative analysis of the returned surveys was completed by an independent research
consultant hired by the university. This analysis led to a qualitative report that correlated the
original cohort’s data and the skill sets covered in the original program with the responding
participants’ current practices. Triangulation of data was possible through a review of original
program documents, including a standards/class activity matrix and daily classroom feedback
sheets. Table 1, below, outlines the skill sets, theoretical bases, and required tasks from the original program:


Table 1. Aspiring Leaders Program Overview/Literature Review

Skill Set 1: Developing, presenting, and implementing an effective vision for schools
Research/theoretical basis: Kouzes & Posner (2002, 2003)
Task: Students were required to create and present a vision statement to their school staff.

Skill Set 2: Using and analyzing data to improve student learning
Research/theoretical basis: Bernhardt (2000)
Task: Students were given hypothetical data sets of schools; they had to list goals related to the data.

Skill Set 3: Coaching teachers for improved student achievement
Research/theoretical basis: DuFour & Marzano (2009)
Task: Students were required to demonstrate an effective professional learning community (PLC) program implementation.

Skill Set 4: Evaluating instruction and demonstrating effective teaching
Research/theoretical basis: Marzano, Pickering, & Pollock (2001); Marzano (2003); Marzano, Waters, & McNulty (2005); Danielson (2007)
Task: Students reviewed several tapes of teachers teaching and discussed with colleagues the effectiveness of what they observed; the goal was to be consistent in their evaluations.

Skill Set 5: Collaborating with others effectively
Research/theoretical basis: Kouzes & Posner (2002, 2003)
Task: Students were given hypothetical problems most likely to be faced by urban principals; as part of a team, they were required to identify and suggest solutions.

Skill Set 6: Leading effective meetings
Research/theoretical basis: Garmston & Wellman (2009)
Task: Students role-played leading meetings, using information they had learned about leading productive groups.

Skill Set 7: Understanding the dynamics of change within the school environment
Research/theoretical basis: Wagner et al. (2006)
Task: Students outlined a change initiative they would implement in a school and provided the communication pieces necessary for this change process to be successful.

Skill Set 8: Managing conflict resolution through polarity management
Research/theoretical basis: Johnson (1996)
Task: Students created a polarity map related to diverse ways of managing an issue in their schools.

Skill Set 9: Understanding generational differences
Research/theoretical basis: Zemke, Raines, & Filipczak (2000)
Task: Students discussed at length the generational differences they might encounter in leading a building and how these differences might be a positive and a negative force in a school.

Skill Set 10: Developing school improvement plans
Research/theoretical basis: Senge (1990)
Task: Students were required to develop a strategic plan for school improvement, including goals and activities.

During the analysis, to adequately compare skill sets mastered in 2008-2009 with those
being used in 2012, it was necessary to list the skill sets presented by the professors in
2008-2009. Table 1 summarizes the leadership skills taught in the original program, the
required tasks, and references for the theoretical models and literature that supported these
activities.

PARTICIPANT ACTIVITIES AND RESPONSES

In 2009, all of the participants interviewed for the original study were very enthusiastic about
the Aspiring Leaders Program. They felt that all of the content was aligned with the work they
would engage in as urban leaders. They were particularly grateful for the cohort group design
that complemented their individual degree work, since this allowed them to network with
other current and upcoming leaders — creating their own community of practice within the
district. They also felt that the GVSU faculty members were very responsive to their needs,
adjusting the customized curriculum to ensure it was a good fit for the challenges they faced.

In the 2012 follow-up study, a similar pattern emerged. Only one respondent reported
frustration with implementing the practices taught, and this individual was seen as an outlier
in the data. All of the remaining respondents overwhelmingly reported positive use of the skill
sets they had mastered in 2008-2009. Table 2 details the responses by skill set of the 12
participants who responded to the 2012 survey. “Yes response only” indicates that the
participant had used the identified skill set. “Yes response with comments” indicates that the
participant had used the skill set and gave at least one specific example of how he/she utilized
the skill. “No response only” indicates that the participant had not used the skill set in his/her
position. “No response with comments” indicates that the participant had not used the skill
and made at least one explanatory comment.

Table 2. Survey Respondents’ Replies by Skill Set Learned
(Responses per skill set: “Yes” only / “Yes” with comments / “No” only / “No” with comments)

1. Developing, presenting, and implementing an effective vision for schools: 0 / 11 / 1 / 0
2. Using and analyzing data to improve student learning: 3 / 8 / 1 / 0
3. Coaching teachers for improved student achievement: 2 / 8 / 1 / 1
4. Evaluating instruction and demonstrating effective teaching: 2 / 7 / 0 / 3
5. Collaborating with others effectively: 1 / 9 / 2 / 0
6. Leading effective meetings: 3 / 8 / 1 / 0
7. Understanding the dynamics of change within the school environment: 0 / 10 / 2 / 0
8. Managing conflict resolution through polarity management: 2 / 9 / 1 / 0
9. Understanding generational differences: 4 / 6 / 1 / 1
10. Developing school improvement plans: 2 / 7 / 3 / 0

It is important to note that the responses to the 2012 survey correlate, by title of skill
set, to the oral interview questions used in the 2009 research project. For example, in 2009
“developing and implementing a vision for their schools” was seen as important to the
participants. In the 2012 survey the question was asked, “How have you used the components
of your vision speech in your work with staff?” This mirrored set of questions allowed for a
comparison of targeted skill sets taught in the program. As one reviews the survey questions,
it is apparent that a core set of leadership skills emerges for use in authentic work. The
program participants found these skill sets very helpful as they moved into their positions as
school leaders. While the targeted skill sets noted in this program do not comprise a
comprehensive list of the skills school leaders use, the results validate the efficacy of the ones
taught in the Aspiring Leaders Program. They are skill sets being used in real time by
principals facing real issues, and being used with success.

As can be seen in the raw overall totals shown in Table 2, the participant responses
about the program in the 2012 survey were overwhelmingly positive. Specific information
follows about some of the original activities participants undertook in relation to each of the
10 skill sets shown in Table 1. Typical participant reactions that emerged during the 2012
study are included for each skill. By reviewing the activities and responses in detail,
conclusions can be reached about the value of the program.

Developing a Vision

The Aspiring Leaders Program devoted a great deal of time to the issue of creating an
effective vision for schools. “Until educators can describe the ideal school they are trying to
create, it is impossible to develop policies, procedures, or programs that will help make that
ideal a reality” (DuFour & Eaker, 1998, p. 64). Making the connection between the creation
and the implementation of a vision was a major focus for the group. As DuFour and Eaker
(1998) note:

The process that is used to develop a vision statement can foster the pervasive support
and endorsement that make such a statement an effective instrument for change. The
most important question to ask in guiding the process is, “Will this strategy foster
widespread ownership?” (p. 66)

Based on the thinking of DuFour and others, participants were directed to assume the
role of a principal presenting a vision speech to faculty members. These vision speeches were
videotaped and reviewed by the group and the person presenting. In the survey results, the
participants overwhelmingly felt this exercise had a great influence on their role as a school
leader. One indicated, “I have used my vision speech to elicit and implement ideas and
strategies to support students.” Another participant noted, “It helped me to become specific
about what we were going to work on as a staff throughout the school year to support student
learning.”


Using and Analyzing Data

The program participants were given several exercises in data analysis. For example, they
were directed to work in teams on a set of state testing data for a particular school. They had
to comment on four forms of data: demographics, program, achievement, and perceptions. As
Bernhardt (2000) points out:

Analyses of demographics, perceptions, student learning, and school processes provide
a powerful picture that will help us understand the school’s impact on student
achievement. When used together, these measures give schools the information they
need to improve teaching and learning and to get positive results. (p. 14)

The teams were required to analyze all forms of data and to make recommendations to
a mock school improvement team. This challenged their thinking on using data and broadened
their view about the kinds of data that are necessary for accurate planning. As one participant
noted in the survey, “The Aspiring Leaders Program helped me plan effectively, learn how to
analyze data and utilize results, and, therefore, plan effectively and efficiently.” Another
participant commented, “With the use of data staff could more readily accept our current
reality, begin to build trust with each other and set our eyes on our purpose.”

Coaching Teachers

In the original program, the team of professors continually reinforced the notion that the
answer to improved student achievement exists within the classroom — that teachers, when
given the opportunity to coach each other, can find the answers for student achievement. In a
prior study, Joyce and Showers (2002) maintained that coaching contributed to the transfer of
staff learning: (a) that coached teachers used their newly learned strategies more appropriately
than uncoached teachers in terms of their own instructional objectives and theories of specific
models of teaching, and (b) that coached teachers exhibited greater long-term retention of
knowledge about, and skill with, strategies in which they had been coached and increased the
appropriateness of their use of new teaching models over time.

This focus on coaching teachers was validated in a statement by a participant who
noted how this had become a reality in her leadership practice. “I have used paired teaching
— teaching another’s class and having them observe, giving ideas that will help them improve
lessons — it has made a big impact on my staff.” Another participant noted that mentoring,
paired teaching, and observing other teachers’ classes had helped teachers improve their
lessons.

Evaluating Instruction

“Really knowing what the Danielson Rubric says has helped me a lot as I observe teachers.”
This comment from a program participant points to the importance of teacher evaluation as a
skill set for school leaders. The program’s professors provided practice in this area by
showing video clips of teachers teaching various subjects. Participants used the Danielson
Rubric (Danielson, 2007) to evaluate what they observed. As they had opportunities to
practice using the rubric, the participants’ confidence rose. Not only did their use of the rubric
improve, but they began to discuss the nature of quality teaching. These discussions added to
their leadership skills as their role in evaluation became clearer. Participants also reported
they were able to use the Danielson Rubric with more confidence in conjunction with
classroom visits and walk-through checklists. One participant noted, “Building consensus on
what ‘quality teaching’ really is has made a profound impact on the staff and their
craftsmanship, which I have been able to document in their evaluations.”

Collaborating With Others

“Collaboration fuels group development when individual members envision (a) the potential
of the group as a collective force in the school, and (b) the expanding capacity of the group for
accomplishing important work that individuals working in isolation would not be able to
achieve” (Garmston & Wellman, 2009, p. 21).

The program’s professors emphasized the important role school leaders play in
developing collaborative teams throughout the school — that getting teachers to engage in
collective inquiry to find the best way to teach a skill to students is the essence of quality
leadership. The program participants studied the Learning Cycle, which includes five
components: use knowledge, acquire knowledge, analyze knowledge, share/create knowledge,
and employ professional conversations (Knowles, in Brayman, Gray, & Stearns, 2010).

This is just one example of the methods in a larger “tool kit” that the participants
received relating to teacher collaboration. A participant reflected on his program experience
with this skill set in his 2012 survey response: “The cooperative learning nature of the cohort
helped to ground theory into practice through presentations and discussions.” Another
participant noted that “. . . trying to include all staff in decision-making has increased staff
buy-in and ownership of our programs.”

Leading Meetings

The original participants in the Aspiring Leaders Program indicated that they had trepidations
about their skill in leading meetings. During the program, the dynamics of successful
meetings were discussed. From setting the stage for risk-free dialogue to accomplishing
important work, all components of a staff meeting were analyzed, including (a) how to
distinguish the urgent from the important; (b) how to stay on track, on topic, and focused; (c)
how to use conflict constructively; (d) how to orchestrate space and materials; and (e) how to
make decisions that stay made (Garmston & Wellman, 2009).

In the 2012 survey, participants reported that the use of relationship builders at the
beginning of each meeting created a positive climate in which teachers were able to
communicate openly. As one participant noted, “In my staff meetings I’ve been clear that I’m
not an expert in everything but I am committed to learning what we need to know to move
forward.” The participants also noted that the program’s professors modeled effective meeting
practices as part of their classroom delivery of information. This modeling led to the
following comment: “Using staff meeting time for discussion and collaboration time has been
very valuable.”


Understanding Change

Understanding the dynamics of change and leading a change process were important
instructional components in the Aspiring Leaders Program. The five disciplines of Senge’s
(1990) change model were shared with the group: (a) team learning, (b) systems thinking, (c)
personal mastery, (d) mental models, and (e) building a shared vision. Each of these
disciplines was discussed with the participants, and they were required to apply the disciplines
to an authentic school situation. The participants also studied the components of Fullan’s
(2001) change model: (a) moral purpose, (b) understanding the process of change, (c)
knowledge creation, (d) relationships, and (e) coherence making. Two of the participants
shared the following responses to this focus on change:

I have spent a lot of time talking about change with my staff. We have had staff
reductions, a new reading series, and changes in staff and student demographics to
name just a few. I have used the presentations from the class and all of the
information.

I think the ideas we learned about starting the change process with a few key players
have helped me bring about successful changes within my school. Also the dialogue
about being a positive change agent with regards to attitude has helped.

Managing Conflict

The program participants were taught a method for managing conflict called Polarity
Management (Johnson, 1996). This process helped the future leaders understand how to move
from concentrating on the problem to developing an acceptable solution that all staff members
can accept. As Johnson (1996) indicates, leaders become more effective by using Polarity
Management because they are able to (a) save time and energy by not trying to solve
difficulties that are unsolvable, (b) anticipate and minimize problems that occur when
workplace dilemmas are not managed well, and (c) improve their decision making.

In the 2012 survey, participants noted that the use of Polarity Management had helped
them significantly in dealing with conflict resolution in their schools. One said, “Being more
assured that I can manage conflict, I am comfortable trying to include staff in decision-making
and increase their ownership of the issues at hand.” Another participant stated, “Holy cow! I
am currently in a position which had been held by one principal for 25 years. I had to lean
heavily on the class information on managing conflict to guide and support me.”

Understanding Generational Differences

As Lovely (2005) points out, “A significant and potentially problematic result of the changing
dynamics of the American work force is the growing infusion that brings young, old, and in-
betweens together into the same employment mix. . . . Recognizing the portrait of each
generation enables superintendents and other managers (principals) to hone in on employee
strengths, make weaknesses irrelevant and foster a greater appreciation for diversity” (pp. 30-
31).


An important skill set presented to the participants in the original program was dealing
with generational differences in school staffs. One participant reported on the importance of
learning about generational differences by saying, “I have staff who have 35 plus years of
experience and new staff as well. The information taught has been very useful working
through communication issues.” Another participant stated, “I have staff who say, ‘these
kids,’ so I have used the information to create a dialogue to change this attitude.” And a third
participant noted in the survey, “There is a generational gap between me and my staff; I have
used the information to help me work better with them.”

Developing School Improvement Plans

The process for developing school improvement plans was the area of study that brought all of
the other skill sets together, enabling the participants to see how to move a district forward.
The other skill sets all came into play as the participants worked with their colleagues and
staff to develop school improvement plans. The participants collected school improvement
plans from various districts and analyzed them. They discussed among themselves the
strategies they identified in each plan and asked themselves if these plans would work in their
school. With the opportunity to learn from other participants’ experiences and leadership
challenges, they were able to crystallize their own vision of themselves as leaders as they
worked with staff to implement a school improvement plan.

The participants read the work of Tony Wagner and his Harvard Change Leadership
Group. Using the school evaluation tool provided in the text “Change Leadership” (2006), the
participants had an opportunity to evaluate a hypothetical school’s readiness for change and
the implementation of a new school improvement plan. In the 2012 follow-up survey, one
participant stated:

Well I’ve tried and failed a few times but I was so happy to have that knowledge under
my belt. It gave me a great place to start, and now it’s a matter of really understanding
the people I work with so that I can match it with best practices in terms of change.

Another participant said, “Discussion on cultural competency, generational
differences, evaluations, and leading adaptive change have been the most valuable parts of the
program for me. The professors instilled confidence in me to take the next step which I will
never forget.”

DISCUSSION

The original cohort of participants gave overwhelmingly positive responses about the
Aspiring Leaders Program when initial interviews were conducted in 2009. The researchers
believed a more in-depth review of their responses would yield valuable insights into how the
skill sets continued to be used, and this led to the 2012 study. The spirit of a qualitative study
is to tell a story. As Miles and Huberman (1994) point out, “Qualitative studies take place in a
real social world, and can have real consequences in people’s lives . . . so we who render
accounts of it must maintain standards worth striving for” (p. 2). Thus, it was very important
that the 2012 survey capture the stories the respondents shared, not just data.

As shown by the quantitative and qualitative results of the 2012 survey, there was a
strong element of sustainability in the participants’ use of skills they learned in the original
program. Fullan (2005) defines sustainability as “the capacity of a system to engage in the
complexities of continuous improvement consistent with deep values of human purpose” (p.
ix). With this definition in mind, the 2012 follow-up study confirmed and validated the
importance of this program to the participants. The goal of the original program was to
improve the craftsmanship of the participants through mastery of a defined set of skills. This
study’s results verified that nearly all of the skills continue to be used by nearly all of the
participants who responded to the survey. This result was reinforced by the significant level of
congruence found in the participants’ responses in the 2009 and 2012 studies.

It is important to note that the team of three professors quickly identified the need to
utilize adult learning theory. Adults learn best when they can apply, within a short time frame,
what they have learned (Lindeman, in Knowles, Holton III, & Swanson, 2005). Thus, the
cohort structure was critically important to the success of this program. Participants were able,
in working with their cohort partners, to take their learning from initial knowledge to behavior
to usable skill. Having identifiable skill sets that could be immediately applied in the
participants’ professional lives became the goal of every lesson the professors presented.

This process was supported by the fact that all of the professors in the program had
been long-time urban school leaders. Therefore, all of the key instructional content had been
gleaned from and applied “in the field” by the professors in their previous jobs. The
combination of a customized curriculum, a strong cohort structure, and instruction by
seasoned practitioners resulted in a highly successful experience for the participants.

A final critical program element to note was the ongoing support and availability of
the professors. In both 2009 and 2012, the participants considered this to be an important part
of their success, as represented by the following comments:

• “The ongoing availability of the professors and the constant support went well
beyond the time frame of the program and was outstanding.”

• “I specifically consulted them about staff issues and was provided much support in
the area of difficult conversations with staff.”

• “The professors have always been available at any time during or after school
hours. They have also made themselves available for professional development in
our schools.”

Lessons Learned

Some of the lessons that emerged from a comparison of the 2009 data with the 2012 data
include the following:

• It was vitally important to listen to the needs of the participants. This was
demonstrated in the professors’ ability to provide relevant educational instruction
and resources/materials.

• The Aspiring Leaders Program was successful in giving the participants a set of
skills they could use in their leadership roles.

• The professors’ willingness to provide a safe atmosphere in which to discuss
current issues and challenges ultimately contributed to the program’s perceived
success and usefulness.


• It was important to maintain rigorous course requirements, which helped assure
program quality.

• The professorial team teaching model that was utilized greatly enhanced the
participants’ engagement in the program.

• To generate the greatest value for the participants, the professors had to be willing
to give of their own time and expertise on an ongoing basis long after the program
concluded.

Recommendations for Future Study and Partnerships

In addition to the lessons learned, the following recommendations are offered for the
consideration of any organization that might undertake a similar project:

• One area for future study is to ascertain what skills these participants still lacked
in their ability to do their jobs successfully. Having this information could inform
adaptations to future training programs of this kind.

• In developing these partnerships, professors need to be willing to immerse
themselves in the participating school district’s system and any existing urban
initiatives and issues.

• University personnel need to approach these partnerships with flexibility and an
understanding that they will need to adapt existing curricula to meet the specific
needs of the urban educators/participants involved.

• A successful partnership requires the consistent presence of professors who are
willing to work with their urban educators/participants to build trusting
relationships; this will support a safe learning environment.

• These partnerships require a team approach, which models for the participants how
teamwork can move school improvement initiatives forward in their urban settings.

CONCLUSION

To a great extent, the successful future of our nation’s public schools lies in the hands of
school leaders. The 2008-2009 Aspiring Leaders Program sought to give future education
leaders in one urban school district the skill sets necessary to enable them to be highly
effective professionals. When the 2012 study was used to “check in” with the participants in
the program, it confirmed the efficacy of that training. Such university/K-12 school
partnerships should be encouraged across the country. They have the potential to positively
transform the future for our children.

REFERENCES

Angelle, P., Wilson, N., & Mink, G. (2011). Building bridges through school-university
partnerships. NCPEA Education Leadership Review, 12(1), 39-51.
Bernhardt, V. (2000). Designing and using databases for school improvement. Larchmont,
NY: Eye on Education.
Brayman, J., Gray, M., & Stearns, M. (2010). Taking flight to leadership and literacy:
Soaring to new heights in learning. New York, NY: Rowman & Littlefield, in
association with AASA.
Danielson, C. (2007). Enhancing professional practice: A framework for teaching.
Alexandria, VA: ASCD.
DuFour, R., & Eaker, R. (1998). Professional learning communities at work. Bloomington,
IN: National Education Service; Alexandria, VA: ASCD.
DuFour, R., & Marzano, R. (2009, February). High-leverage strategies for principal
leadership. Educational Leadership, 66(5), 62-68.
Fullan, M. (2001). Leading in a culture of change. San Francisco, CA: Jossey-Bass.
Fullan, M. (2005). Leadership and sustainability. San Francisco, CA: Jossey-Bass.
Garmston, R., & Wellman, B. (2009). The adaptive school: A sourcebook for developing
collaborative groups. Norwood, MA: Christopher-Gordon.
Johnson, B. (1996). Polarity management. Amherst, MA: HRD Press.
Joyce, B., & Showers, B. (2002). Student achievement through staff development (3rd ed.).
Alexandria, VA: ASCD.
Knowles, M., Holton III, E., & Swanson, R. (2005). The adult learner: The definitive classic
in adult education and human resource development (6th ed.). Burlington, MA:
Elsevier.
Kouzes, J., & Posner, B. (2002). Encouraging the heart: A leader’s guide to rewarding and
recognizing others. San Francisco, CA: Jossey-Bass.
Kouzes, J., & Posner, B. (2003). The leadership challenge. San Francisco, CA: Jossey-Bass.
Lovely, S. (2005, September). Creating synergy in the schoolhouse. The School
Administrator, 62(8).
Marzano, R. (2003). What works in schools: Translating research into action. Alexandria,
VA: Association for Supervision and Curriculum Development.
Marzano, R., Pickering, D., & Pollock, J. (2001). Classroom instruction that works:
Research-based strategies for increasing student achievement. Alexandria, VA:
Association for Supervision and Curriculum Development.
Marzano, R. J., Waters, T., & McNulty, B. A. (2005). School leadership that works: From
research to results. Alexandria, VA: Association for Supervision and Curriculum
Development.
Miles, M. B., & Huberman, A. M. (1994). Qualitative data analysis. Thousand Oaks, CA:
Sage Publications.
Northouse, P. (2007). Leadership: Theory and practice. Thousand Oaks, CA: Sage
Publications.
Senge, P. (1990). The fifth discipline. New York, NY: Doubleday/Dell.
Voltz, D. (1998). Challenges and choices in urban education: The perceptions of teachers and
principals. The Urban Review, 30(3).
Wagner, T., et al. (2006). Change leadership. San Francisco, CA: Jossey-Bass.
Wallace Foundation. (2003). Retrieved January 30, 2013, from
http://www.wallacefoundation.org/learn-about-wallace/Pages/default.aspx
Zemke, R., Raines, C., & Filipczak, R. (2000). Generations at work. New York, NY:
AMACOM.

30

Training Programme for Secondary School Principals:
Evaluating its Effectiveness and Impact

Disraeli M. Hutton
University of the West Indies

The article presents the evaluation of the training programme for secondary school principals
conducted in the period between 2006 and 2009. A mixed method approach was used to
conduct the summative evaluation with 28 graduate participants. For the impact evaluation,
15 of the graduates were interviewed three years after the programme was completed. The
quantitative data were analyzed using means and standard deviations. The findings revealed
that participants gained technical and relational skills but responses were less favourable in
relation to cognitive or conceptual skills, while the support from lecturers and workplaces
was strong but less favourable from the central ministry. There was a positive impact on
participants’ performance during the period of training, especially in the areas of confidence,
collegiality and overall leadership. For the impact evaluation, graduates credited the
programme for their promotion to principal, vice principal, or senior teacher, or for being
given added responsibilities. They identified areas to be enhanced in any further programme,
including school law, financial management, policy development and interpretation, conflict
management, and relational skills. The main recommendation is that any further programme
for the training of principals should be guided by the findings of this evaluation.

INTRODUCTION

The poor performance of the public education system of Jamaica has led to calls for
improvement in the quality of leadership provided by principals. Correspondingly, the weak
performance by students who sat the Caribbean Secondary Education Certificate (CSEC)
examination and various national examinations was in part blamed on the limitations of the
principals (Hutton, 2010). The response by the Task Force on Education Reform (2004) was
that “all principals are to get continuous training in school management and leadership in a
variety of accredited institutions” (p. 36). While the Task Force on Education Reform seemed
to imply a normal call for continuous upgrading or professional development, the reality was
that many principals had not exhibited the competencies and abilities to impact the
performance of schools (Hutton, 2010).

NCPEA International Journal of Educational Leadership Preparation, Vol. 8, No. 1– March 2013
ISSN: 2155-9635 © 2013 National Council of Professors of Educational Administration


In response to the concerns for the leadership deficiencies in the school system, a
training programme for principals and senior administrators was conducted by the School of
Education, University of the West Indies. On completion of the training programme, a
summative evaluation was conducted, followed by an impact evaluation three years later. The
purpose of this evaluative study was to (a) determine if the training programme addressed the
areas of weaknesses which were targeted for improvement by principals and other senior staff
who participated, and (b) identify challenges experienced with the implementation of the
programme.

Literature Review

There was a time when professional development in the education system was reserved for
teachers and administrators. However, it was soon recognized that the principals also needed
to display modern and effective management and leadership skills (Skrla, Erlandson, Reed,
& Wilson, 2001). Darling-Hammond, Meyerson, Lapointe and Orr (2010) identified
instructional and transformational leadership as the two interrelated factors underpinning
effective school leadership. A unique assessment of the principal was advanced by Owings
and Kaplan (2012), who posited that over the period of stewardship as principal, their
“perspective experience and behaviour may change . . . (therefore their) career should be
considered flexible and fluid” (p. 517). The implication of this perspective is the necessity for
principals to receive training and professional guidance in order to function effectively at each
stage of their leadership development.

For the novice principals and others in training, Darling-Hammond et al. (2010)
established that the approach that works has “both peer support, including cohort groups and
collegial network; and expert supports, such as mentoring and coaching” (p. 75). Bossi and
Warren (2008), identifying the areas of focus for the Association of California School
Administrators/NTC, said that leadership training is individualized, supported by both
coaching and mentoring, and delivered both on-site and institutionally. Addressing the skills
required of principals for the 21st century, Green (2010) postulated that while instructional
leadership is central, managerial competencies are necessary for the effective performance of
schools.

Reeves (2002) described the traditional training programme for principals as
fragmented and proposed that it should be “a combination of research, case study, small group
work and personal reflection... (and it) should focus on four key areas: people, strategies,
organizations, and systems” (p. 162). Lovely (2004) endorsed Reeves’s (2002) view and
emphasised that, with the new approach to the training of principals, “apprentice and intern
programmes allow prospects to get into the trenches and discover firsthand what it means to
be a learning leader. Apprentice and intern programmes for aspiring principals are an
excellent means of getting that experience to your candidates” (p. 40).

As we focus on how school leaders are selected and prepared, Fullan (2007) endorsed
the use of “succession practices to ensure the continual cultivation and flow of new leaders;
and the fostering of habits and practices that envision school heads as system leaders” (p.
296). Reeves (2002) suggested that (a) principals should emerge from the classroom, (b)
schools should become a centre for training in principalship, and (c) training should be
delivered by a variety of talents from the public, private and non-governmental sector entities.


The study of leadership has focused on leaders’ personal qualities, the skills they
acquire and their actual behaviour. Personal traits have been identified as important to
leadership effectiveness (Hoy & Miskel, 2005; Lunenburg & Ornstein, 2008). Hutton (2010)
noted that “high performing principals are self-confident and (they) believe in their ability to
provide leadership for the school to achieve the goals and objectives being pursued” (p. 6).

Hoy and Miskel (2005) identified a combination of skills and trait variables under
three broad categories: personality, motivation and skills. Yukl (2002) and Northouse (2004)
identified technical, interpersonal and conceptual skills as important for effective leadership.
Effective leadership in the 21st century is represented by an amalgamation of the: (a)
contingency approach which focuses on the actual situation requiring leadership input
(Hanson, 2003); (b) behavioural approach, which focuses on what leaders do--this was
established by the Iowa studies of the 1930s and Ohio and Michigan of the 1950s (Gorton,
Alston & Snowden, 2007), and (c) combination of traits and behaviours approach which is
characterized as leadership styles (Lunenburg & Ornstein, 2008). Therefore, it is evident that
the delivery of any training programme must be informed by both the lessons learned from the
experiences of an effective training programme and our knowledge of effective leadership
skills.

Programme Background

The certificate programme in School Leadership for Secondary School Principals began in
July 2006 as a 3-year project involving the School of Education and the Ministry of
Education. The aim of the programme was to train 75 secondary school principals in the
principles and practices of effective school leadership. The programme came out of the
Memorandum of Understanding (MOU) between the Ministry of Finance and Planning and
the Jamaica Confederation of Trade Unions (JCTU), which represented a part of the
agreement to settle the salary negotiations between the Government and the Jamaica Teachers
Association in 2006. An initial sum of J$3.4M was allocated to fund the programme, but this
was increased by J$2.8M in 2009 to fund the upgrading of the programme to a Post Graduate
Diploma in Educational Administration. Sixty-nine (69) principals, vice principals and senior
teachers were recruited and trained between 2006 and 2009. All 19 students of Cohort I
completed the certificate programme, and 27 of the 31 students in Cohort II were also
successful. In Cohort III, which pursued the Post Graduate Diploma in Educational
Administration, 16 of the 19 students completed the programme.

Of the 46 students from Cohorts I and II who did the Certificate programme, 19 of
them successfully completed the Post Graduate Diploma in Educational Administration and
graduated in 2011. Even though the programme was evaluated as successful by both the client
and the participants, the Ministry of Education took the decision not to continue it beyond
Cohort III because of financial constraints. A number of the participants who were vice principals
and senior teachers have since been promoted to principals and vice principals, respectively.
The programme remains popular among graduates and those in the Ministry of Education who
were responsible for its successful implementation.


Programme Structure

The certificate programme was designed to commence during the summer and complete at the
end of the academic year. There were five three-credit courses and a practicum and study.
The programme initially started with an introductory non-credit module in computing. The
third cohort of students was afforded the opportunity to pursue the Post Graduate Diploma in
Educational Administration. The students who had already completed the certificate version
of the programme were required to do an additional 12 credits to complete the Post Graduate
Diploma. Some 62 students were successful in completing the programme at either the
Certificate and Diploma or the Diploma level only.

METHOD

Subjects

The sample selected for the summative evaluation consisted of 47 programme participants
who pursued the Post Graduate Programme in Educational Administration. Twenty-eight (28)
persons completed the questionnaire, which represented a response rate of 60%. For the
impact evaluation, 15 graduates of the programme were interviewed to elicit their responses to
its effectiveness three years after they graduated. It should be noted that the small sample size
represented a significant limitation of the study.

Measure

A questionnaire containing quantitative and qualitative items was used to capture responses
from the participants. Quantitative responses were collected through the use of six sub scales.
The sub-scale items were formatted using a five-point Likert scale, where responses ranged
from 1=Strongly Disagree, 2=Disagree, 3=Undecided, 4=Agree, 5=Strongly Agree. The sub
scales were created and used to assess: 1) the skills, knowledge and attitudes acquired through
the Principalship programme (16 items); 2) the support received by participants from
their schools, the Department of Educational Studies, and the Ministry of Education (11
items); 3) participant feedback on the organisation and delivery of the programme
(6 items); 4) the effectiveness of the principals’ training programme (6 items); 5) how the
programme prepared participants to develop and demonstrate personal skills and abilities
(6 items); and 6) the relevance of the programme to the performance of participants’ duties
and responsibilities at their schools (10 items). Qualitative responses were collected through
the use of three open-ended statements which sought to elicit additional information on
trainees’ assessment of the training programme. The statements are as follows:

1. List two benefits you have achieved/will achieve from participating in this programme
2. List two things that you did not like about the programme
3. Indicate any other helpful comments related to the programme

For the impact evaluation, a questionnaire guide consisting of ten questions was used
to obtain information from respondents. The questions were related to (a) personal benefits
gained as a result of the programme, (b) the overall impact of the programme three years after
completion, (c) areas they would target for improvement, and (d) areas that they would add to
or remove from the programme.

Procedure

The questionnaires were administered on the final day of the programme for Cohorts 1 and 2.
Participants were asked to complete the instruments and return them to programme
administrators the same day or within a week after they were handed out. The instruments
were emailed to Cohort 3, who had completed the programme the previous year. For the
impact evaluation, respondents were contacted and telephone interviews were planned based
on an agreed schedule. The interviews took between 10 and 20 minutes, and this phase of the
data gathering was completed over a two-week period. Participants for the interviews were
selected from the list of trainees who pursued the Post Graduate Diploma in Educational
Administration.

Data Collection and Analysis

The data were collected using a Likert-type scale, which focused on six categories, each
representing a sub scale. The categories included competencies acquired, quality of support,
programme facilitation, programme effectiveness, programme effect on personal skills and
abilities, and plant and facilities maintenance and development. The data were analyzed using
means, standard deviations and percentages. The qualitative data related to the summative
evaluation were analyzed using the inductive thematic analysis technique, which was outlined
in a word tree diagram (Thomas, 2003). For the impact evaluation, the themes were selected
based on the areas that were consistently emphasized by respondents. The questionnaire was
validated based on feedback provided by colleagues who participated in the programme along
with students who completed the programme. The results of the Cronbach Alpha test
conducted are shown in Table 1.

Table 1
Cronbach Alpha Results for each of the Six Sub Scales Measuring Participants’ Evaluation of
Principals’ Training Programmes

Scale                                                      Cronbach’s Alpha
Competency Acquired Scale                                  .967
Quality of Support Scale                                   .797
Programme Facilitation Scale                               .802
Programme Effectiveness Scale                              .908
Programme Effect on Personal Skills and Abilities Scale    .942
Plant and Facilities Maintenance and Development Scale     .903
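The reliability coefficients above follow the standard Cronbach's alpha formula, alpha = k/(k-1) * (1 - sum of item variances / variance of total scores). A minimal sketch of the computation in Python is shown below; the response matrix is purely illustrative, not the study's data.

```python
from statistics import pvariance

def cronbach_alpha(rows):
    """Cronbach's alpha for one sub scale: rows = one list of item scores per respondent."""
    k = len(rows[0])                      # number of items in the sub scale
    item_cols = list(zip(*rows))          # transpose to per-item score columns
    sum_item_var = sum(pvariance(col) for col in item_cols)
    total_var = pvariance([sum(r) for r in rows])  # variance of respondents' totals
    return (k / (k - 1)) * (1 - sum_item_var / total_var)

# Illustrative 5-point Likert responses: 4 respondents x 3 items (not study data)
responses = [[4, 5, 4], [3, 4, 3], [5, 5, 5], [2, 3, 2]]
alpha = cronbach_alpha(responses)  # ~0.98: items are highly consistent
```

Values near 1, such as the .967 reported for the Competency Acquired Scale, indicate that the items in a sub scale vary together and can reasonably be summarised by a single scale mean.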


RESULTS

1. What were the skills, knowledge and attitudes acquired as a result of the principals’
programme?

The competencies acquired by participants in the training programme were measured by 16
items on a Likert-type sub scale. The means and standard deviations for the items are
presented in Table 2.

The analysis reveals that “interpreting information related to the running of the school—rules,
regulations and policies” (M=4.32, SD=0.90) was the highest-rated skill on the scale. The
least acquired skill was “applying creative solutions to solve problems” (M=3.92, SD=0.70).
The mean for the sub scale was M=4.63 (SD=0.11). This indicates that, on average,
participants were in “strong agreement” with the positively worded items, thus signifying that
a substantial amount of skill and knowledge was garnered by programme participants.

Table 2
Competencies acquired as a result of the programme

Item                                                                          N   Mean  Std. Deviation
1. Interpreting information related to the running of the school—rules,
   regulations, policies, etc.                                                25  4.32  0.90
2. Formulating and communicating school vision                                25  4.24  0.93
3. Displaying relational skills naturally and consistently                    25  4.20  0.91
4. Communicating clearly with all stakeholders                                25  4.20  0.82
5. Establishing sociable relationships with both school and non-school
   communities                                                                25  4.16  0.90
6. Evaluating staff performance                                               25  4.12  0.83
7. Establishing cooperative and collaborative relationships with internal
   and external stakeholders                                                  25  4.12  0.67
8. Being able to see the "big picture" related to the job                     25  4.12  1.01
9. Maintaining student discipline                                             25  4.08  0.95
10. Interpreting and managing budget                                          25  4.08  0.95
11. Showing sensitivity, empathy, consideration, and tact                     25  4.08  0.86
12. Being able to advance and/or consider "big ideas"                         25  4.04  1.09
13. Supervising and coordinating improvements in teaching and learning        25  4.00  0.95
14. Understanding the impact of the world and local environment on
    school life                                                               25  4.00  0.76
15. Interpreting and using test results                                       25  3.92  0.95
16. Applying creative solutions to solve problems                             25  3.92  0.70

Note: N = 28. Scale interpretation ranges for the scale means: 1 = Strongly Disagree (1.00-1.49), 2 = Disagree (1.50-2.49), 3
= Undecided (2.50-3.49), 4 = Agree (3.50-4.49), 5 = Strongly Agree (4.50-5.00). Scale M = 4.63 (SD = .11).
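The interpretation ranges in the table note can be applied mechanically to any item or sub-scale mean. A small illustrative sketch follows; the band boundaries mirror the note, while the response list is hypothetical:

```python
def interpret(mean):
    """Map a Likert item or sub-scale mean to its agreement band (1-5 scale)."""
    bands = [(1.49, "Strongly Disagree"), (2.49, "Disagree"),
             (3.49, "Undecided"), (4.49, "Agree"), (5.00, "Strongly Agree")]
    for upper, label in bands:
        if mean <= upper:
            return label

# Hypothetical responses to one item from five participants (not study data)
scores = [4, 5, 4, 4, 3]
item_mean = sum(scores) / len(scores)  # 4.0
band = interpret(item_mean)            # "Agree"
```

For example, an item mean of 3.92 falls in the Agree band, while the sub-scale mean of 4.63 falls in the Strongly Agree band.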


2. What was the quality of support received from your schools, the Ministry of Education
(MoE), and the administrators of the programme from the School of Education?

The quality of support sub scale was used to measure how participants perceived the support
received during the training programme. The results in Table 3 show that, in general, the
participants ‘agreed’ with all the items presented on the scale (M = 3.84, SD = .437).
Specifically, participants indicated that support came primarily from the programme’s
instructors, and they agreed most strongly that the “instructors were lenient with late
assignments” (M = 4.38, SD = .74). Participants experienced the least amount of support from
the Ministry of Education (M = 3.23, SD = .83).

Table 3
Support received during the implementation of the programme

Item                                                                          N   Mean  Std. Deviation
1. The instructors were lenient with late assignments                         21  4.38  0.74
2. My instructors demonstrated concern for the challenges I was
   encountering during the programme                                          21  4.38  0.59
3. I received assistance from my school in carrying out the responsibilities
   assigned to me when I was/am attending classes                             21  4.33  0.73
4. My superiors at school provided emotional support while I attended
   the course                                                                 21  4.14  0.79
5. I received the travel support recommended by the MOE for attending
   the programme                                                              21  3.86  1.42
6. My academic department (UWI) demonstrated concern regarding the
   needs of the students                                                      21  3.86  0.66
7. Some of my personal responsibilities were taken on by other family
   members while I attended classes                                           21  3.76  1.22
8. My academic department (UWI) was responsive to the needs and
   concerns of the students                                                   21  3.76  0.83
9. MOE was responsive to the needs and concerns of the students               21  3.33  0.86
10. I received general support in preparing my assignments                    21  3.23  1.37
11. MOE demonstrated concern regarding the needs of the students              21  3.23  0.83

Note: N = 28. Scale interpretation ranges for the scale means: 1 = Strongly Disagree (1.00-1.49), 2 = Disagree (1.50-2.49), 3
= Undecided (2.50-3.49), 4 = Agree (3.50-4.49), 5 = Strongly Agree (4.50-5.00). Scale M = 3.84 (SD = .437).

3. How effectively was the programme organized in order to facilitate your attendance
and participation in classes while continuing to perform your responsibilities at
school?

Table 4 shows the analysis of the programme facilitation sub scale. The results indicated that,
on average, participants ‘agreed’ with all the items presented on the scale (M = 4.1, SD = .423).
The most appealing aspect of the programme’s organisation was the fact that “instructors were
cognizant of the fact that we were adult learners” (M = 4.5, SD = .67). Participants were
neutral on the view that the “programme was scheduled with due consideration given to our
job commitments” (M = 3.34, SD = 1.47), thus ranking this item as the least appealing
component of the programme.

Table 4
Participants’ perception of the programme’s organisation

Item                                                                          N   Mean  Std. Deviation
1. Instructors were cognizant of the fact that we were adult learners         32  4.50  0.67
2. The instructional techniques used were appropriate                         32  4.41  0.62
3. The instructors were prepared to deliver the courses they taught           32  4.38  0.49
4. The programme was executed in an efficient and effective manner            32  4.16  0.77
5. The physical learning accommodations provided were appropriate for
   the programme                                                              32  4.06  1.11
6. The programme was scheduled with due consideration given to our job
   commitments                                                                32  3.34  1.47

Note: N = 28. Scale interpretation ranges for the scale means: 1 = Strongly Disagree (1.00-1.49), 2 = Disagree (1.50-2.49), 3
= Undecided (2.50-3.49), 4 = Agree (3.50-4.49), 5 = Strongly Agree (4.50-5.00). Scale M = 4.1 (SD = .423).

4. How did you rate the effectiveness of the principals’ training programme?

Table 5 shows that, when asked to rate the effectiveness of the principals’ training programme,
the sub scale mean (M=4.1, SD=.423) indicated that on average participants ‘agreed’ with
all statements in the sub scale. Overall, the statement with the highest rating was “I am
expected to play a leadership role in the implementation of the transformation programme in
my school” (M=4.55, SD=.57). The lowest ranked item on the sub scale was “my supervisors
expected me to contribute more to the running of the school on completion” (M=4.39,
SD=.67).

Table 5
Participants’ perception of the programme’s effectiveness

Item                                                                          N   Mean  Std. Deviation
1. I am expected to play a leadership role in the implementation of the
   transformation programme in my school                                      31  4.55  0.57
2. I am expected to play a major leadership role in the school                31  4.55  0.68
3. My superiors will expect me to contribute more to the running of
   the school                                                                 31  4.52  0.63
4. I am expected to contribute more to the solving of problems in
   the school                                                                 31  4.52  0.81
5. I am expected to develop the relationship between my school and the
   wider school community                                                     31  4.52  0.51
6. My supervisors expected me to contribute more to the running of the
   school on completion                                                       31  4.39  0.67

Note: N = 28. Scale interpretation ranges for the scale means: 1 = Strongly Disagree (1.00-1.49), 2 = Disagree (1.50-2.49), 3
= Undecided (2.50-3.49), 4 = Agree (3.50-4.49), 5 = Strongly Agree (4.50-5.00). Scale M = 4.1 (SD = .423).

5. How well did the programme prepare you to develop and demonstrate personal skills
and abilities?

Participants were provided with a sub scale to capture their views on ways in which the
programme prepared them to develop and demonstrate personal skills and abilities. Based on
the sub scale mean (M=4.5, SD=.077), respondents strongly agreed with all the items on the
sub scale. Respondents were mostly in agreement with the view “I am willing to demonstrate
greater commitment to the development of the school” (see Table 6).

Table 6
Perception of skills and abilities acquired from participating in the programme

Item                                                                          N   Mean  Std. Deviation
1. I am willing to demonstrate greater commitment to the development of
   the school                                                                 32  4.66  0.70
2. I am willing to work harder to achieve the goals of my school              32  4.60  0.71
3. I am better able to challenge my superiors on policy issues which I
   feel are suitable for my school                                            32  4.53  0.62
4. I am more confident that I will demonstrate greater care in managing
   the resources and facilities in my school                                  32  4.50  0.72
5. I am more confident in performing my responsibilities                      32  4.47  0.95
6. I am better able to work with colleagues and other constituents in
   my school                                                                  32  4.46  0.72

Note: N = 28. Scale interpretation ranges for the scale means: 1 = Strongly Disagree (1.00-1.49), 2 = Disagree (1.50-2.49), 3
= Undecided (2.50-3.49), 4 = Agree (3.50-4.49), 5 = Strongly Agree (4.50-5.00). Scale M = 4.5 (SD = .077).

6. How did you rate the relevance of the programme in performing your duties and
responsibilities at your school?

The results in Table 7 below produced a sub scale mean (M=4.63, SD=.15), which showed
that respondents found the training programme relevant to their duties and responsibilities at
school. The training programme was perceived to be of greatest relevance to “Leadership for
School Improvement” and of least relevance to “Study”.


Table 7
Participants’ perception of the programme’s relevance

Item                                                                          N   Mean  Std. Deviation
1. Leadership for School Improvement                                          28  4.82  0.39
2. Curriculum theory, planning and development                                28  4.75  0.52
3. Organizational behaviour in education                                      28  4.71  0.46
4. Educational supervision and evaluation                                     28  4.71  0.53
5. Human, facilities and financial management in schools                      28  4.68  0.67
6. Legal and professional competencies for educational administrators         28  4.64  0.83
7. Issues in Jamaican education                                               28  4.61  0.57
8. Action research in educational administration                              28  4.61  0.57
9. Practicum                                                                  28  4.43  0.88
10. Study                                                                     28  4.32  0.98

Note: N = 28. Scale interpretation ranges for the scale means: 1 = Not Important (1.00-1.49), 2 = Somewhat Important (1.50-
2.49), 3 = Undecided (2.50-3.49), 4 = Important (3.50-4.49), 5 = Very Important (4.50-5.00). Scale M = 4.63 (SD = .15).

7. What personal gains (including promotion, increased responsibilities, etc.) did you
achieve as a result of the principals’ training programme?

Figure 1 shows that the largest group of respondents (40%) were of the view that their
participation in the training programme resulted in their having greater respect from
colleagues, while 32% shared the view that they were better able to manage specific
programmes. A further 20% noted that they were assigned additional duties at their
school. Only 8% obtained a promotion upon completion of the training programme.


Figure 1. Participants’ perception of personal gains obtained from the programme

DISCUSSION

Quantitative Data Analysis

The high rating of the programme content by participants is an indication that they felt the
programme targeted some of the critical areas which were related to deficiencies in their
performance as principals and school leaders. Additionally, the high rating must be seen in
light of the fact that (a) participants selected were practising administrators; (b) they were able
to determine if the knowledge and performance gaps were covered by the content; and (c) it
was a collaborative effort mainly between the central ministry and the University of the West
Indies (UWI), which would naturally improve the relevance of the programme content.

Of particular relevance to the Jamaican situation is the optimum approach to the
training of school administrators in an American school system. As Lovely (2004) pointed out,
“to better align school districts’ needs with principal preparatory programmes, partnership
needs to be established between university and a single or consortium of districts” (p. 29).
Darling-Hammond et al. (2010) reminded us that principals’ competencies are not limited
to what they do but also include what they know; therefore, it is when theory and practice are
brought together that the effective principal is truly created.
One of the main areas of concern expressed by the programme participants was the
quality of support received from the Ministry of Education, the UWI administrators and
family in completing academic and work-based assignments. There were also constant
complaints surrounding issues such as special allowances that should have been provided by
their schools as directed by the central ministry. Darling-Hammond, Lapointe, Meyerson, Orr
and Cohen (2007) emphasized the importance of providing sufficient support for training,
especially programmes “offered free of charge, (with) tuition waivers, release time to
facilitate clinical fieldwork, and paid internships” (p. 96).
Even with the concerns raised by programme participants, aspects of the organization
and execution of the programme received high ratings from them. For example, participants
were permitted by the central ministry to be away from school three days per week in order to
participate in the programme. At the same time, when school assignments were not
completed because programme participants were away at the UWI, a new time to complete
outstanding tasks had to be found, which naturally increased the burden on the trainees. For
future programmes, this area has to be addressed, although participants must expect to make
some sacrifices, especially when they will benefit personally from a programme designed and
implemented for their own upgrading.

There was a high expectation for programme participants to perform more effectively
after completing the programme. This was borne out by the fact that some persons were
elevated to the post of principal or other senior administrative positions. Others were given
added responsibilities which assisted in building their own confidence. Participants were
more willing and better prepared to operate at the strategic level in terms of goal setting,
policy issues, and the general operation of the schools. This is a good outcome of the
programme because there is the danger of focusing on operational issues “at the expense
of their more strategic imperatives” (Fullan, 2008, p. 4). In addition to the skills related to
organization and governance, their relational skills were also enhanced through working with
colleagues and other school constituents. This is most important because principals should be
able to “determine the strengths and expectations of those individuals, gain insights into their
values, beliefs, interests, levels of motivation, and understanding how they view the school
and the behaviour of the leader” (Green, 2010, p. 50).

The relevance of the programme was confirmed by the respondents in terms of the
duties and responsibilities they had to perform. This was possible because courses were
determined by the limitations exhibited by principals and other administrators in the school
system. Within a collaborative framework, the Ministry of Education and the UWI identified
the areas of greatest need. The fit between the skills required to perform effectively as
administrators and the content delivered seemed close as far as programme participants
were concerned.

The need for better programme alignment was confirmed by Lovely (2004) as
important if principals are expected to improve performance. Further, Darling-Hammond et
al. (2007) pointed out that one of the major problems with traditional training programmes
was the “misalignment between program content and candidate needs” (p. 7). The confidence
that participants expressed in their willingness to take on additional administrative
responsibilities while doing the programme should be specially noted; this, coupled with
the fact that their superiors were willing to assign them additional tasks, must at least in part
be credited to the relevance and appropriateness of the programme.

Qualitative Data Analysis

Participants identified the development of confidence in self as one of the enduring benefits of
the programme. This resulted from the exposure, experience and competencies gained from
the courses and the overall programme setting, which allowed them to perform their
leadership responsibilities with a greater level of certainty and resolve. As Oyer (2011)
explained, confidence is a personal trait that does matter because it “is an essential attribute of
effective leaders” (p. 109). According to respondents, the collegial atmosphere was also a
central factor responsible for the confidence gained. It bolstered them to execute the roles they
were assigned with greater level of effectiveness and efficiency.


Confidence was also associated with their ability to better manage human resources
and apply knowledge of culture to how things were done in the schools. Their ability to
assist others in the schools, improve linkages with external organizations and perform
duties related to curriculum matters was also related to the confidence gained as a result of the
programme. Their introduction to ideas and practices based on the most recent research on
leadership, and the effort made by programme providers to be more responsive and relevant to
the needs of the students, would have been central in building the confidence factor. Patterson
and Kelleher (2005) noted that the level of confidence can make a difference between
effective and ineffective performance. The training programme definitely enhanced
confidence, which in turn improved the performance of the principals in training.

The collegial relationships which developed among the principals and other
administrators in training became a source of inspiration for each person. Although Robbins
and Alvy (2009) noted that “collegiality that exists when staff members collaborate is not
created over night” (p. 115), in this programme collegial relationships served as a parallel
learning opportunity and a source of information for addressing real problems in the schools.
In these instances, colleagues shared their experiences and perspectives with each other.
Subsequently, greater insights were gained regarding the resolution of intractable issues faced
by administrators in their respective schools. The help of colleagues was also credited with
assisting participants to complete their postgraduate diploma, gain information on human
resources and, importantly, manage their budgets more effectively. Additionally, building
effective networks with other principals and understanding the vital role of stakeholders
were linked to the collegial spirit that was fostered by the programme.

Factors related to time and work were identified as limitations to the programme. The
lack of time impacted performance in five areas:

1. Programme participants expressed concern that assignments could not be
completed satisfactorily because of time constraints, which created a stressful
learning situation.

2. The programme itself was felt to be too time consuming because it was crammed
into a period of one year.

3. Participants were required to continue performing at their posts even though they
were away from the job, sometimes three days per week.

4. The volume of work was also noted as one of the factors which had significant
time implications.

5. The number of assignments given by lecturers was identified as one of the areas
directly related to the volume of work.

The participants recommended that the programme be broadened to include other
levels of leadership in the school system, including vice principals, heads of
departments, and senior staff with major administrative responsibilities. Some
expressed the view that all principals should be exposed to at least selected modules of the
programme. Overall, the programme was seen as both timely and relevant, with special credit
given to the hands-on nature of the courses. It was recommended that the programme become
a prerequisite course of study for those selected for leadership responsibilities.


Programme Impact—Three Years After

A follow-up evaluation was conducted with 15 of the participants who had completed the
programme, with the aim of ascertaining its impact. All 15 respondents
indicated that the programme had a significant impact on their performance as administrators.
One respondent said that “we have learned to improve the way we administer and lead” and
the action research project that she completed “has been used to guide the development of the
school’s safety and security programme.” Five of the individuals were promoted to either
principal or vice principal, and they all confirmed that the programme played an important
role in their promotion. Affirming the role played by the programme in his elevation to the
chief administrator at his school, one of the participants said “we got a chance to practice what
we learned while on the job and this assisted in my promotion to the principalship.” Another
respondent indicated that the programme provided much information on school operations and
the presentation on the new and expanded role of the principals in the present school system
was enlightening.

As in the initial evaluation, respondents restated the significance of sharing
experiences among programme participants. They were able to relate to each other the
problems they faced on a day-to-day basis and received suggestions and strategies to apply.
Further, they were able to discuss the interventions and make adjustments after further
discussions with their colleagues. One point of note was the number of experienced principals
and other senior administrators who were instrumental in sharing tested formulas for
addressing a myriad of problems, including student discipline, school-community relationships,
and relationships with the central ministry and regional offices. In fact, this could be
considered an early form of networking, because the relationships continued for some three
years after the conclusion of the programme. The most beneficial experiences for most
persons included (a) the delivery of some courses, including organizational development; (b)
the sharing of ideas during class discussions and break periods; and (c) communication with
colleagues while actually on the job.

The practicum experience was also cited as beneficial to the programme participants.
Those who were given the opportunity to do their practicum at business enterprises had high
praise for the experience gained, which in some cases was very different from what obtains
in a school setting. They noted that decisions were carried out with dispatch, that
management was less tolerant of persons who failed to perform, and that there was
no doubt regarding the priorities the companies identified as important. Areas such as
production, accountability, and quality were constantly emphasized, and everyone was
expected to play his or her role in achieving agreed targets. This was in contrast to the more
laidback and lackadaisical approach taken by some school leaders. It was suggested that
school leaders be exposed to the practices of businesses in order to transfer some of
these qualities to the school organization.

The courses identified as most useful by the respondents included financial
management and facilities maintenance. Darling-Hammond et al. (2010) emphasized that
“designing and implementing programmes that produce strong principals requires an
understanding of how to organize and finance components and supports” (p. 133). With
secondary schools operating annual budgets of over J$20M, and with some involved in income-
earning activities, it is important that principals, vice principals and other senior officers of the


school system have substantial training so that they can interpret financial statements and
understand the financial matters of the school. In fact, school principals are classified as the
‘accountable officer’ (Financial Administration and Audit Act, 1996, 74A), so they clearly
have a functional responsibility for the financial affairs of their schools.

Facilities maintenance was also an area introduced for the first time in a
formal training programme related to school administration. Increasingly, it is recognized that
the quality of the physical plant impacts learning and students’ attitudes to schooling.
Commenting on the importance attached to the quality of the school facilities by principals,
Hutton (2010) indicated that “the high performing principals posited the view that, in addition
to facilitating learning, the physical environment and the quality of the facilities are true
representations of the conscience of the school and the pride the school community has in
itself and stakeholders” (p. 13).

One of the main areas of complaint, three years after the conclusion of the
programme, related to how the programme was administered. Concerns were raised regarding
the journey which some persons had to make across the island to the location at UWI. One
person suggested that if the programme had been offered on a residential basis, it would not
have been as stressful, especially for participants from the rural areas. It is prudent, therefore,
that any new programme for the training of principals take into consideration the general
impact on the participants.

A programme that is designed and implemented by the central ministry
must address the issues that could limit the performance of its participants.
Especially in a period of financial exigency, areas such as travel, time off, and formal staff
replacement must be addressed before participants commence the programme.
Given that participants have to travel long distances from the rural areas to attend classes in
Kingston, Jamaica, the programme should be offered at alternative locations in order to
minimize some of the problems identified above. Additionally, the use of online facilities
would significantly reduce the problems related to travel.

A number of recommendations were made to improve the content of the programme.
First, a number of graduates who were interviewed indicated that there is a need for principals
to be equipped with effective interpersonal and conflict management skills. They emphasized
the necessity of these skills because, with some schools having over 70 teachers and in excess
of 1,500 students, conflicts will naturally arise, and it is the principal who is expected to
intervene. One respondent pointed out that “in order to lead staff and motivate them
effectively, human relation and interpersonal skills are necessary.” Emphasizing the urgency
for principals to acquire these skills, one principal said that “sometime the conflict and
antagonist relationship are displayed by the teachers who are the trained professionals.”

The second area emphasized was the need to provide principals with the
competencies to develop policies. Increasingly, schools that are broadening their scope of
activities to include the wider community must introduce policies to manage effectively areas
of agreement related to business contracts or joint ventures. This is in keeping with the
thrust of the central ministry to decentralize some of the areas related to governance and
authority to the lower levels of the education system, including the schools (Hutton, 2009).
Additionally, principals’ ability to interpret existing policies, whether regional or national,
must be enhanced, according to Skrla, Erlandson, Reed and Wilson (2001), if they are
to act in accordance with policy guidelines.


