two-way mixed design
another term for SIMPLE MIXED DESIGN

two-way repeated measures ANOVA

also two-way repeated measures factorial ANOVA, two-way within-
subjects ANOVA, two-factor within-subjects ANOVA, two-way within-
groups ANOVA, two-factor within-groups ANOVA

a parametric inferential procedure performed when both FACTORs (i.e.,
INDEPENDENT VARIABLEs) are WITHIN-SUBJECTS FACTORs. Because each
participant produces scores on all the CONDITIONs, the scores for each
participant are related. In two-way repeated measures ANOVA the same
participants do the two or more conditions on one factor and also do the
two or more conditions on the other factor. An example would be an ex-
periment in which the first factor is a comparison between learning lists
of long or short words. The second factor is a comparison between fast
and slow rates of presenting the lists of words, as shown below:

Variable A (length of words)
Condition A1 (short words)
Condition A2 (long words)
Variable B (presentation rates)
Condition B1 (fast rate)
Condition B2 (slow rate)

In two-way repeated measures ANOVA the same participants will be do-
ing the two conditions on variable A and also the two conditions on vari-
able B. The same participants will be doing all four conditions listed in
Table T.6.

 Greene & Oliveira 2005

Condition 1    Short words (A1) presented at a fast rate (B1)
Condition 2    Short words (A1) presented at a slow rate (B2)
Condition 3    Long words (A2) presented at a fast rate (B1)
Condition 4    Long words (A2) presented at a slow rate (B2)

Table T.6. Same Participants Doing all Four Conditions
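
Purely as an illustration (not part of the original entry), the sketch below runs a two-way repeated measures ANOVA in Python with statsmodels' AnovaRM on invented recall scores for the word-length by presentation-rate example; the variable names and score values are hypothetical.

```python
# Illustrative sketch only: two-way repeated measures ANOVA with statsmodels'
# AnovaRM; the recall scores are invented for the word-length x rate example.
import numpy as np
import pandas as pd
from statsmodels.stats.anova import AnovaRM

rng = np.random.default_rng(1)
rows = []
for subject in range(1, 9):                          # 8 participants do all 4 conditions
    for length in ["short", "long"]:
        for rate in ["fast", "slow"]:
            base = 20 if length == "short" else 15   # hypothetical word-length effect
            bonus = 3 if rate == "slow" else 0       # hypothetical rate effect
            rows.append({"subject": subject, "length": length, "rate": rate,
                         "recall": base + bonus + rng.normal(0, 2)})

df = pd.DataFrame(rows)
result = AnovaRM(df, depvar="recall", subject="subject",
                 within=["length", "rate"]).fit()
print(result)   # F tests for length, rate, and the length x rate interaction
```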

two-way repeated measures factorial ANOVA
another term for TWO-WAY REPEATED MEASURES ANOVA

two-way table
see CONTINGENCY TABLE

two-way within-groups ANOVA
another term for TWO-WAY REPEATED MEASURES ANOVA

two-way within-subjects ANOVA
another term for TWO-WAY REPEATED MEASURES ANOVA

two-way within-subjects design
see WITHIN-SUBJECTS DESIGN

type I error
also alpha (α) error, false positive, false alarm
an error in decision making or HYPOTHESIS TESTING that results from re-
jecting a NULL HYPOTHESIS (H0) when, in fact, it is true. Type I error oc-
curs when there is really no difference between the POPULATION PARAM-
ETERs being tested, but the researcher is misled by chance differences in
the SAMPLE DATA. In other words, it occurs when a researcher errone-
ously concludes that there is a difference between the groups being stud-
ied when, in fact, there is no difference and, thus, concludes that a false
ALTERNATIVE HYPOTHESIS is true.
Consider the first row of Table T.7, where H0 is in actuality true.
First, if H0 is true and we fail to reject H0, then we have made a correct
decision; that is, we have correctly failed to reject a true H0. The PROBA-
BILITY of this first outcome (i.e., correct acceptance) is known as (denot-
ed by) 1 - ALPHA (α). Second, if H0 is true and we reject H0, then we
have made a Type I error. That is, we have incorrectly rejected a true H0.
Our sample data has led us to a different conclusion than the population
data would have. The probability of this second outcome (making a Type
I error) is known as α. Therefore, if H0 is actually true, then our sample
data lead us to one of two conclusions: Either we correctly fail to reject
H0, or we incorrectly reject H0. The sum of the probabilities for these two
outcomes when H0 is true is equal to 1, i.e., (1 - α) + α = 1. The more
concerned a researcher is with committing a Type I error, the lower the
value of alpha the researcher should employ. A Type I error is therefore
a false positive judgment concerning the validity of the mean difference
obtained.
By contrast, a Type II error (also called beta (β) error, false negative)
occurs when the researcher fails to reject H0 when it is, in fact, false (i.e.,
one concludes that a true alternative hypothesis is false). In this case, the
researcher concludes that the TREATMENT or INDEPENDENT VARIABLE
does not have an effect when, in fact, it does. That is, the researcher con-
cludes that there is not a difference between the two groups being studied
when, in fact, there is a difference.

(a) General case

state of nature (reality)    decision: fail to reject H0    decision: reject H0
H0 is true                   correct decision (1 - α)       Type I error (α)
H0 is false                  Type II error (β)              correct decision (1 - β) = power

(b) Umbrella/rain example

state of nature (reality)    do not carry umbrella          carry umbrella
                             (fail to reject H0)            (reject H0)
H0 is true (no rain)         correct decision:              Type I error:
                             no umbrella needed (1 - α)     look silly (α)
H0 is false (rains)          Type II error:                 correct decision:
                             get wet (β)                    stay dry (1 - β) = power

Table T.7. Statistical Decision Table

Consider the second row of Table T.7, where H0 is in actuality false.
First, if H0 is really false and we fail to reject H0, then we have made a
Type II error. That is, we have incorrectly failed to reject a false H0. Our
sample data has led us to a different conclusion than the population data
would have. The probability of this outcome—accepting the null hypoth-
esis when the alternative hypothesis is true—is known as β (beta). Sec-
ond, if H0 is really false and we reject H0, then we have made a correct
decision; that is, we have correctly rejected a false H0. The probability of
this second outcome (i.e., correct rejection) is known as 1 - β (also re-
ferred to as POWER). Thus, if H0 is actually false, then our sample data
lead us to one of two conclusions: Either we incorrectly fail to reject H0,
or we correctly reject H0. The sum of the probabilities for these two out-
comes when H0 is false is equal to 1, i.e., β + (1 - β) = 1. A Type II error
is therefore a false negative judgment concerning the validity of the
mean difference obtained.
Consider the following example, as shown in part (b) of Table T.7. We
wish to test hypotheses about whether or not it will rain tomorrow, where
H0 states that it will not rain. Again there are four potential outcomes. First, if H0 is really
true (no rain) and we do not carry an umbrella, then we have made a cor-
rect decision as no umbrella is necessary (probability = 1 - α). Second, if
H0 is really true (no rain) and we carry an umbrella, then we have made a
Type I error as we look silly carrying that umbrella around all day
(probability = α). Third, if H0 is really false (rains) and we do not carry
an umbrella, then we have made a Type II error and we get wet
(probability = β). Fourth, if H0 is really false (rains) and we carry an
umbrella, then we have made the correct decision as the umbrella keeps
us dry (probability = 1 - β).
One can totally eliminate the possibility of a Type I error by deciding to
never reject H0. That is, if we always fail to reject H0, then we can never
make a Type I error. One can totally eliminate the possibility of a Type II
error by deciding to always reject H0. That is, if we always reject H0,
then we can never make a Type II error. With these strategies we do not
even need to collect any sample data as we have already decided to nev-
er/always reject H0. Taken together, one can never totally eliminate the
possibility of both a Type I and a Type II error. No matter what decision
we make, there is always some possibility of making a Type I and/or
Type II error.
In qualitative data a Type I error is committed when a statement is be-
lieved when it is, in fact, not true, and a Type II error is committed when
a statement is rejected when it is, in fact, true.
see also CONFIDENCE INTERVAL

 Marczyk et al. 2005; Porte 2010; Cohen et al. 2011; Clark-Carter 2010; Leary 2011;
Sheskin 2011; Upton & Cook 2008; Sahai & Khurshid 2001; Kirk 2008
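
As a rough numerical illustration (not part of the original entry), the simulation below, written in Python with invented values, shows that when H0 is really true a test carried out at α = .05 rejects H0 (a Type I error) in roughly 5% of repeated samples.

```python
# Hedged sketch: empirical Type I error rate when H0 is true (invented values).
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
alpha, n_sims, n = 0.05, 10_000, 30

false_positives = 0
for _ in range(n_sims):
    a = rng.normal(50, 10, n)          # both samples come from the same population,
    b = rng.normal(50, 10, n)          # so H0 (no difference) is true
    if stats.ttest_ind(a, b).pvalue < alpha:   # rejecting a true H0 = Type I error
        false_positives += 1

print(f"empirical Type I error rate: {false_positives / n_sims:.3f}")   # close to .05
```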

Type-token ratio
a measure of lexical diversity which involves dividing the number of
types by the number of tokens: types refer to the different words that are
used in one data set, and tokens refer to the total number of running
words in that data set, including repetitions.

 Mackey & Gass 2005
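
A minimal sketch of the calculation, with an invented sentence and a deliberately crude tokenizer:

```python
# Hedged sketch: type-token ratio = number of types / number of tokens.
import re

text = "the cat sat on the mat and the dog sat by the door"
tokens = re.findall(r"[a-z']+", text.lower())   # all running words (13 tokens)
types = set(tokens)                             # distinct words (9 types)

print(len(types) / len(tokens))                 # 9 / 13 = 0.69...
```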

Type II error
see TYPE I ERROR

typical case sampling
see PURPOSIVE SAMPLING

U

U
an abbreviation for MANN-WHITNEY U TEST

unbalanced design

also nonorthogonal design

a term which is used to denote a FACTORIAL DESIGN with two or more
FACTORs having unequal numbers of observations or values in each cell
or LEVEL of a factor. There are at least three situations in which you
might have an unbalanced design. One is if the SAMPLEs are proportional
and reflect an imbalance in the POPULATION from which the sample
came. Thus, if we knew that two-thirds of second language (L2) students
were female and one-third male, we might have a sample of L2 students
with a 2:1 ratio of females to males. For example, we might look at the
way male and female L2 students differ in their exam performance after
receiving two teaching techniques—seminars or lectures. With such pro-
portional data it is legitimate to use the WEIGHTED MEANs analysis.
A second possible reason for an unbalanced design is that participants
were not available for particular TREATMENTs but there was no systemat-
ic reason for their unavailability; that is, there is no connection between
the treatment to which they were assigned and the lack of data for them.
Under these circumstances it is legitimate to use the UNWEIGHTED
MEANS ANALYSIS or the LEAST SQUARES METHOD OF ANALYSIS.
A third possible reason for an unbalanced design would be if there were
a systematic link between the treatment group and the failure to have da-
ta for such participants; this is more likely in a quasi-experiment.
Given the difficulties with unbalanced designs, unless you are dealing
with proportional samples, some people recommend randomly removing
data points from the treatments that have more than the others. Alterna-
tively, it is possible to replace missing data with the mean for the group,
or even the overall mean. If you put in the group mean you may artifi-
cially enhance any differences between conditions, and if you use the
overall mean to replace missing data you may obscure any genuine dif-
ferences between groups. If either of these methods is used then the total
degrees of freedom should be reduced by one for each data point esti-
mated.

 Clark-Carter 2010; Sahai & Khurshid 2001; Hatch & Lazaraton 1991
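
A minimal pandas sketch of the group-mean substitution mentioned above, using invented scores and hypothetical column names; the count of estimated points is kept so that the total degrees of freedom can be reduced accordingly.

```python
# Hedged sketch: replacing missing scores with their group mean (invented data).
import numpy as np
import pandas as pd

df = pd.DataFrame({
    "group": ["seminar"] * 4 + ["lecture"] * 3,
    "score": [70, 75, np.nan, 65, 80, np.nan, 85],
})

n_estimated = int(df["score"].isna().sum())      # reduce total df by this many
df["score"] = df.groupby("group")["score"].transform(lambda s: s.fillna(s.mean()))

print(df)
print("reduce total degrees of freedom by", n_estimated)
```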

uncorrelated design
another term for BETWEEN-SUBJECTS DESIGN

uncorrelated t-test
another term for INDEPENDENT SAMPLES t-TEST

unequal variance t-test
another term for WELCH’S TEST

ungrouped variable
another term for CONTINUOUS VARIABLE

unidimensional scale
see CUMULATIVE SCALE

unidirectional hypothesis
another term for DIRECTIONAL HYPOTHESIS

unimodal distribution
see MODE

unique variance
see MULTIVARIATE ANALYSIS OF VARIANCE

unitary trait hypothesis
see INTEGRATIVE TEST

univariate analysis
the analysis of single VARIABLEs without reference to other variables.
The commonest univariate statistics are DESCRIPTIVE STATISTICS such as
the MEAN and VARIANCE. When a data distribution for one DEPENDENT
VARIABLE (DV) of interest is displayed this is called a univariate distri-
bution; the BAR CHART and HISTOGRAM are examples of univariate
graphic displays. Univariate statistical analysis does not imply analysis
involving only one variable; there may be one or more INDEPENDENT
VARIABLEs. For example, a researcher may want to investigate differ-
ences in final examination performance among different groups of stu-
dents. The DV, performance in final examinations, may be explained by
a student’s age (classified as mature candidate or not mature) and gender.
ANALYSIS OF VARIANCE, which is a classical example of a univariate sta-
tistical analysis, may well be an appropriate statistical procedure to use.
This would still be a univariate analysis because the research question re-
lates to whether there are any differences between groups with respect to
a single DV.
Univariate analysis is used in contrast to BIVARIATE and MULTIVARIATE
ANALYSIS involving measurements on two or more variables simultane-
ously.

 Cramer & Howitt 2004; Peers 1996

univariate distribution
see UNIVARIATE ANALYSIS

universe score
see GENERALIZABILITY THEORY

unobserved variable
another term for LATENT VARIABLE

unobtrusive measure
see HAWTHORNE EFFECT

unplanned comparison
another term for POST HOC TEST

unrelated design
another term for BETWEEN-SUBJECTS DESIGN

unrelated samples
another term for INDEPENDENT SAMPLES

unrelated t-test
another term for INDEPENDENT SAMPLES t-TEST

unrestricted question
another term for OPEN-FORM ITEM

unstandardized partial regression coefficient
see STANDARDIZED PARTIAL REGRESSION COEFFICIENT

unstandardized regression coefficient
another term for UNSTANDARDIZED PARTIAL REGRESSION COEFFICIENT

unstructured interview
a conversational type of INTERVIEW in which the questions arise from the
situation. Unstructured interview involves a particular topic or topics to
be discussed but the interviewer has no fixed wording in mind and is
happy to let the conversation deviate from the original topic if potentially
interesting material is touched upon; the participant is free to talk about
what s/he deems important, with little directional influence from the re-
searcher. The intention is to create a relaxed atmosphere in which re-
spondents may reveal more than they would in formal contexts, with the
interviewer assuming a listening role. The respondents will be allowed to
develop the chosen theme as they wish and to maintain the initiative in
the conversation, while the interviewer will restrict himself/herself to en-
couraging the respondents to elucidate further whenever they touch upon
a topic that seems interesting. Naturally, the interviewer will also have to
exercise a degree of control by leading the respondents back to the point
if they begin to digress towards subjects that have nothing to do with the
issue under examination. Should the respondents go off at a tangent, the
interviewer will bring them back to the main theme. Though the basic
theme of the conversation has been chosen beforehand, unforeseen sub-
themes may nevertheless arise during the interview. If these are seen to
be relevant and important, they will be developed further. Thus, different
interviews might emphasize different topics. Moreover, some respond-
ents have more to say than others; some are more outgoing, while others
are more reserved. In addition, the empathetic relationship that is built up
during the course of the interview varies from case to case; some inter-
viewees will get on the same wavelength as the interviewer, develop a
relationship of trust with him/her and reveal their innermost feelings and
personal reflections, while in other cases this mechanism is not triggered.
It, thus, follows that the interviews will have an extremely individual
character and will differ widely in terms of both the topics discussed and
the length of the interview itself. Such a technique could be used when a
researcher is initially exploring an area with a view to designing a more
structured format for subsequent use. In addition, this technique can be
used to produce the data for a CONTENT ANALYSIS or even for a qualita-
tive method such as DISCOURSE ANALYSIS.
see also ETHNOGRAPHIC INTERVIEW, NON-DIRECTIVE INTERVIEW, SEMI-
STRUCTURED INTERVIEW, STRUCTURED INTERVIEW, INTERVIEW GUIDE,
INFORMAL INTERVIEW, TELEPHONE INTERVIEW, FOCUS GROUP

 Clark-Carter 2010; Corbetta 2003; Dörnyei 2007; Heigham & Croker 2009

unstructured observation
see OBSERVATION

unsystematic error
another term for RANDOM ERROR

unsystematic variance
another term for ERROR VARIANCE

unweighted mean
see WEIGHTED MEAN

unweighted means analysis
a method of analysis in two-way and higher-order FACTORIAL DESIGNs
containing unequal numbers of observations or values in each cell. The
procedure consists of calculating the cell means and then carrying out a
balanced data analysis by assuming that the cell means constitute a single
observation in each cell.
see also UNBALANCED DESIGN

 Sahai & Khurshid 2001

U-shaped distribution
an asymmetrical FREQUENCY DISTRIBUTION having general resemblance
to the shape of the letter U. As shown in Figure U.1, the distribution has
maximum frequencies at both ends of the distribution, which decline rap-
idly at first and then more slowly, reaching a minimum between them.

 Sahai & Khurshid 2001

(a) Histogram                  (b) Continuous Curve

Figure U.1. Two U-Shaped Distributions
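
One simple way to generate data with this shape, offered only as an illustration, is to sample from a Beta(0.5, 0.5) distribution:

```python
# Hedged sketch: a Beta(0.5, 0.5) sample has maximum frequencies at both ends.
import numpy as np

rng = np.random.default_rng(0)
x = rng.beta(0.5, 0.5, size=10_000)

counts, edges = np.histogram(x, bins=10, range=(0, 1))
for left, c in zip(edges[:-1], counts):
    print(f"{left:.1f}-{left + 0.1:.1f} {'#' * (c // 100)}")   # tallest bars at 0 and 1
```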

U test
an abbreviation for MANN-WHITNEY U TEST

utility
the facet of VALIDITY that is concerned with whether measurement or
observational procedures are used for the correct purposes. If a procedure
is not used for what it was originally intended for, there might be a ques-
tion as to whether it is a valid procedure for obtaining the data needed in
a particular study. If it is used for something other than what it was orig-
inally designed to do, the researcher must provide additional evidence
that the procedure is valid for the purpose of his/her study. For example,
if you wanted to use the results from the TOEFL to measure the effects
of a treatment over a two-week training period, this would be invalid. To
reiterate, the reason is that the TOEFL was designed to measure lan-
guage proficiency, which develops over long periods of time. It was not
designed to measure the specific outcomes that the treatment was target-
ing.

 Perry 2011

V

V
an abbreviation for CRAMER’S V. Also an abbreviation for PILLAI’S CRITERION

validity
the degree to which a study and its results correctly lead to, or support,
exactly what is claimed. Generally, validity refers to the appropriateness,
meaningfulness, correctness, and usefulness of the inferences a research-
er makes. Validation is the process of collecting and analyzing evidence
to support such inferences. Validity is a requirement for both QUANTITA-
TIVE and QUALITATIVE RESEARCH. In qualitative data validity might be
addressed through the honesty, depth, richness and scope of the data
achieved, the participants approached, the extent of TRIANGULATION and
the disinterestedness (see CONFIRMABILITY) of the researcher. In quanti-
tative research validity might be improved through careful SAMPLING,
appropriate instrumentation and appropriate statistical treatments of the
data. Quantitative research possesses a measure of STANDARD ERROR
which is inbuilt and which has to be acknowledged. In qualitative data
the subjectivity of respondents, their opinions, attitudes, and perspectives
together contribute to a degree of BIAS. Validity, then, should be seen as
a matter of degree rather than as an absolute state.
There are several different kinds of validity: CONTENT VALIDITY, CRITE-
RION-RELATED VALIDITY, CONSTRUCT VALIDITY, INTERNAL VALIDITY,
EXTERNAL VALIDITY, FACE VALIDITY, CONSEQUENTIAL VALIDITY, CAT-
ALYTIC VALIDITY, ECOLOGICAL VALIDITY, CULTURAL VALIDITY, DE-
SCRIPTIVE VALIDITY, INTERPRETIVE VALIDITY, THEORETICAL VALIDITY,
and EVALUATIVE VALIDITY.

 Cohen et al. 2011; Dörnyei 2007; Fraenkel & Wallen 2009

validity coefficient
see CRITERION-RELATED VALIDITY

VARBRUL
a statistical package for DATA ANALYSIS often used in sociolinguistic re-
search.

 Mackey & Gass 2005

variability

also dispersion

the amount of spread among the scores in a group. For example, if the
scores of participants on a test were widely spread from low, middle to
high, the scores would be said to have a large dispersion. To examine the
extent to which scores in a distribution vary from one another, research-
ers use measures of variability, i.e., descriptive statistics that convey in-
formation about the spread or variability of a set of data. The common
statistical measures of variability (also called measures of dispersion)
are VARIANCE, STANDARD DEVIATION, RANGE, and INTER-QUARTILE
RANGE. Less common is the COEFFICIENT OF VARIATION. Variability is
the complementary quality to the CENTRAL TENDENCY of a distribution.
Various data sets may have the same center but different amount of
spreads.

 Richards & Schmidt 2010
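
For illustration only, the sketch below computes the common measures of variability for a small invented set of scores:

```python
# Hedged sketch: range, inter-quartile range, variance, standard deviation,
# and coefficient of variation for an invented set of scores.
import numpy as np

scores = np.array([12, 15, 15, 18, 20, 22, 25, 30])

value_range = scores.max() - scores.min()
q1, q3 = np.percentile(scores, [25, 75])
variance = scores.var(ddof=1)            # sample variance
sd = scores.std(ddof=1)                  # sample standard deviation
cv = sd / scores.mean()                  # coefficient of variation

print(value_range, q3 - q1, round(variance, 2), round(sd, 2), round(cv, 2))
```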

variable
any characteristics or attributes of an object or of a person that can have
different values from one time to the next or from one individual to an-
other. Variable is something that may vary, or differ from person to per-
son or from object to object. For example, you may be left-handed. That
is an attribute, and it varies from person to person. There are also right-
handed people. Height, sex, nationality, and language group membership
are all variables commonly assigned to people. Variables often attributed
to objects include temperature, weight, size, shape, and color. A person’s
proficiency in English as a foreign language may differ over time as the
person learns more and more English. Thus, proficiency in English can
be considered a variable because it may change over time or differ
among individuals. Variables can be quantified on different MEASURE-
MENT SCALEs depending on whether we want to know how much of the
variable a person has or only about the presence or absence of the varia-
ble.
The opposite notion to a variable is a constant, which is simply a condi-
tion or quality that does not vary between cases. It is a construct that has
only one value (e.g., if every member of a sample was 10 years old, the
age construct would be a constant). The number of cents in a United
States dollar is a constant: every dollar note will always exchange for
100 cents; or, the number of hours in a day—twenty-four—is also a con-
stant. Adding a constant to every score in a distribution does not affect
the outcome of the statistical calculations.
In REGRESSION ANALYSIS, constant is the point where the REGRESSION
LINE intersects the vertical axis.
see also LEVEL, INDEPENDENT VARIABLE, DEPENDENT VARIABLE, MODERATOR VARIABLE, INTERVENING VARIABLE, CATEGORICAL VARIABLE, CONTINUOUS VARIABLE
 Porte 2010; Hatch & Farhady 1982; Brown 1988; Brown 1992

variance
a statistical MEASURE OF VARIABILITY of a set of data around the MEAN.
Variance is calculated by summing the squared deviations of the data
values about the mean and then dividing the total by N if the data set is a
POPULATION or by N - 1 if the data set is from a SAMPLE. Variance is, in
fact, the squared value of the STANDARD DEVIATION. It is calculated
from the average squared deviation of each number from its mean. The
larger the variance, the more scattered are the observations on average.
Because of the mathematical manipulation needed to produce a variance
statistic variance is not often used by researchers to gain a sense of a dis-
tribution. In general, variance is used more as a step in the calculation of
other statistics (e.g., t-TEST, ANALYSIS OF VARIANCE) than as a stand-
alone statistic. But with a simple manipulation, the variance can be trans-
formed into the standard deviation.
see also SAMPLE VARIANCE, POPULATION VARIANCE

 Porte 2010; Urdan 2010
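
A minimal sketch of the N versus N - 1 divisor described above, with invented data:

```python
# Hedged sketch: population variance (divide by N), sample variance (divide by
# N - 1), and the standard deviation as the square root of the variance.
import numpy as np

x = np.array([4, 8, 6, 5, 3, 7])                  # invented scores

print(x.var(ddof=0))            # population variance: divide by N
print(x.var(ddof=1))            # sample variance: divide by N - 1
print(np.sqrt(x.var(ddof=1)))   # standard deviation
```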

variance estimate
another term for SAMPLE VARIANCE

variance inflation factor
see MULTICOLLINEARITY

variate
see DISCRIMINANT FUNCTION ANALYSIS

Venn diagram
a graphical representation of the extent to which two or more quantities
or concepts are mutually inclusive and mutually exclusive. Venn diagram
is a system of representing the relationship between subsets of infor-
mation. Usually the totality is represented by a rectangle. Within that rec-
tangle are to be found circles which enclose particular subsets. The cir-
cles may not overlap, in which case there is no overlap between the sub-
sets. Alternatively, they may overlap totally or partially. The amount of
overlap is the amount of overlap between subsets. Figure V.1 shows ex-
amples of Venn diagrams.

Figure V.1. Examples of Venn Diagrams

see also COEFFICIENT OF DETERMINATION

 Cramer & Howitt 2004; Sahai & Khurshid 2001

verbal protocol
another term for VERBAL REPORT

verbal report
also verbal protocol, verbal reporting
an introspective qualitative data collection method which consists of oral
records of an individual’s thought processes, provided by the individual
when thinking aloud either during or immediately after completing a
task. These tasks are usually relatively specific and bounded, e.g., read-
ing a short text. The verbalized thoughts of the participants are usually
free-form, since participants are not provided with preformatted choices
of answers. It is important to understand that verbal reports do not mirror
the thought processes. Verbal reports are not immediate revelations of
thought processes. They represent (a subset of) the information currently
available in short-term memory rather than the processes producing the
information. Cognitive processes are not directly manifest in protocols
but have to be inferred, just as in the case of other types of data.
Typically when researchers talk about verbal reporting they usually im-
ply two specific techniques: think-aloud protocol and retrospective pro-
tocol (sometimes called stimulated recall), differentiated by when the
data is collected. In a think-aloud protocol, the participants are given a
task to perform and during the performance of that task they are asked to
verbalize (i.e., to articulate) what their thought processes are. The re-
searcher’s role is merely to encourage that verbalization through prompt-
ing the participants with utterances such as ‘please keep telling me what
you are thinking’; ‘please keep thinking aloud if you can’. Think-aloud
implies no direct inspection of the mental state, but merely reportage. It
involves the concurrent vocalization of one’s inner speech without offer-
ing any analysis or explanation. Thus, respondents are asked to verbalize
only the thoughts that enter their attention while still in the respondents’
short-term memory. In this way, the procedure does not alter the se-
quence of thoughts mediating the completion of a task and can therefore
be accepted as valid data on thinking. The resulting verbal protocol is
recorded and then analyzed. It is clear that providing think-aloud com-
mentary is not a natural process and therefore participants need precise
instructions and some training before they can be expected to produce
useful data. They need to be told to focus on their task performance ra-
ther than on the think-aloud and they usually need to be reminded to
keep on talking while carrying out an activity (e.g., ‘What made you do
that?’, or ‘What are you thinking about now?’).

The second type is a retrospective protocol or a stimulated recall in
which learners verbalize their thought processes immediately after they
have performed a task or mental operation. In such cases, the relevant in-
formation needs to be retrieved from long-term memory and thus the va-
lidity of retrospective protocols depends on the time interval between the
occurrence of a thought and its verbal report. For tasks with relatively
short response latencies (less than 5-10 seconds), subjects are able to
produce accurate recollections, but for cognitive processes of longer du-
ration, the difficulty of accurate recall of prior thoughts increases. In or-
der to help the respondents retrieve their relevant thoughts, some sort of
stimulus is used as a support for the recall (hence the term stimulated re-
call), such as watching the respondent’s own task performance on video,
listening to a recording of what the person has said, or showing the per-
son a written work that s/he has produced. Thus, the underlying idea is
that some tangible (visual or aural) reminder of an event will stimulate
recall to an extent that the respondents can retrieve and then verbalize
what was going on in their minds during the event.
There are several principles that should be adhered to in conducing ver-
bal reports. These principles include the following:

1) time intervening between mental operations and report is critical and
should be minimized as much as possible;

2) verbalization places additional cognitive demands on mental pro-
cessing that requires care in order to achieve insightful results;

3) verbal reports of mental processes should avoid the usual social con-
ventions of talking to someone;

4) there is a lot of information in introspective reports aside from the
words themselves. Researchers need to be aware of these parallel sig-
nal systems and be prepared to include them in their analyses;

5) verbal reports of automatic processes are not possible. Such processes
include visual and motor processes and low-attention, automatized
linguistic processes such as the social chat of native speakers.

Many criticisms have been leveled against verbal reports, the major one
being that it is highly unnatural and obtrusive to verbalize one’s thoughts.
In addition, verbal reports do not elicit all of the cognitive processes in-
volved in an activity and thus are incomplete. Furthermore, the analysis
of verbal report data is subject to the idiosyncratic interpretations of the
researcher and hence may not be valid. For example, for second language
learners there is also the problem that students are asked to report on
their thought processes in a second language. Although few dispute the
limitations of verbal reports, at this point, the method is one of the few
available means for finding out more about the thought processes of the
participants.

 Heigham & Croker 2009; Mckay 2006; Dörnyei 2007; Perry 2011; Gass & Mackey
2000; Brown & Rodgers 2002; Nunan 1992; Cohen & Macaro 2010

verbal reporting
another term for VERBAL REPORT

verificationalism
a view that a claim or belief is meaningful only if we can state the condi-
tions under which it could be verified or falsified. That is, we need to
state what EMPIRICAL RESEARCH would need to be done to show that the
claim or belief was true, or false. Verifiability and falsifiability are asso-
ciated with LOGICAL POSITIVISM.

 Fulcher & Davidson 2007

vertical axis
see BAR GRAPH

VIF
an abbreviation for VARIANCE INFLATION FACTOR

vignettes
a technique, used in STRUCTURED and IN-DEPTH INTERVIEWs as well as
FOCUS GROUPs, providing sketches of fictional (or fictionalized) scenari-
os. The respondent is then invited to imagine, drawing on his/her own
experience, how the central character in the scenario will behave.
Vignettes thus collect situated data on group values, group beliefs, and
group norms of behavior. While in structured interviews respondents
must choose from a multiple-choice menu of possible answers to a vi-
gnette, in in-depth interviews and focus groups vignettes act as a
stimulus to extended discussion of the scenario in question.

 Bloor & Wood 2006

volunteer sampling
a type of NON-PROBABILITY SAMPLING used when the researchers ask
people to volunteer to take part in their research. In cases where access is
difficult, the researcher may have to rely on volunteers, for example, per-
sonal friends, or friends of friends, or participants who reply to a news-
paper advertisement, or those who happen to be interested from a partic-
ular school, or those attending courses. This method has the obvious ad-
vantage of being easy and cheap but is highly problematic from the point
of view of obtaining an unbiased sample (see SAMPLING ERROR). Closely
related to CONVENIENCE SAMPLING is the use of volunteers as a sampling
strategy. They differ from a convenience sampling in that they are not
under any obligation to participate in the study, whereas the former usu-
ally consists of students who are required to be participants of a research
study as partial fulfillment of their courses. Volunteers are often paid for
their services, whereas participants in convenience samples are not.
When all attempts fail to find participants using other strategies, using
volunteers is often the only way researchers can go. However, research
has shown that using volunteers frequently leads to a sample that is not
representative of a target POPULATION. Findings have shown that in the
West, volunteers tend to be better educated, more motivated, more out-
going, higher in need achievement, and from a higher socioeconomic
level. It is pointed out that if any of these qualities could possibly impact
the variable(s) under investigation, you would have to treat the findings
of the study with some reservation.
see also QUOTA SAMPLING, DIMENSIONAL SAMPLING, PURPOSIVE SAMPLING, SNOWBALL SAMPLING, SEQUENTIAL SAMPLING, OPPORTUNISTIC SAMPLING
 Perry 2011; Cohen et al. 2011

W

W
an abbreviation for KENDALL’S COEFFICIENT OF CONCORDANCE

Waller-Duncan t-test
a POST HOC TEST used for determining which of three or more MEANs
differ significantly in an ANALYSIS OF VARIANCE (ANOVA). This test is
based on the Bayesian t-value (also called Bayes’ Theorem) which de-
pends on the F RATIO for a ONE-WAY ANOVA, its DEGREES OF FREEDOM
and a measure of the relative seriousness of making a TYPE I ERROR ver-
sus a TYPE II ERROR. It can be used for groups of equal or unequal size.

 Cramer & Howitt 2004

web survey
another term for INTERNET SURVEY

Wechsler scales
a group of intelligence tests, including the Wechsler Adult Intelligence
Scale (WAIS), later revised (WAIS-R); the Wechsler Intelligence Scale
for Children (WISC), later revised (WISC-R); the Wechsler Preschool
and Primary Scale of Intelligence (WPPSI); and the Wechsler-Bellevue
Scale, no longer used, all of which emphasize performance and verbal
skills and give separate scores for subtests in vocabulary, arithmetic,
memory span, assembly of objects, and other abilities.

 Coaley 2010; Ary et al. 2010

weighted least squares
see LEAST SQUARES METHOD OF ANALYSIS

weighted mean
the MEAN of two or more groups which takes account of or weights the
size of the groups when the sizes of one or more of the groups differ. If
you end up with unequal sample sizes for reasons not related to the ef-
fects of your TREATMENTs, one solution is to equalize the groups by ran-
domly discarding the excess data from the larger groups. In a weighted
means analysis, each group mean is weighted according to the number of
subjects in the group. As a result, means with higher weightings (those
from larger groups) contribute more to the analysis than do means with
lower weights. When the size of the groups is the same, there is no need
to weight the group mean for size and the mean is simply the sum of the
means divided by the number of groups. When the size of the groups dif-
fers, the mean of each group is multiplied by its size to give the total or
sum for that group. The sum of each group is added together to give the
overall or grand sum.

Groups    Means    Size    Sum
1         4        10      40
2         5        20      100
3         9        40      360
Sum       18               500
Number    3        60      60
Mean      6                8.33

Table W.1. Weighted and Unweighted Mean of Three Groups

This grand sum is then divided by the total number of cases to give the
weighted mean. Take the means of the three groups in Table W.1. The
unweighted mean (an ARITHMETIC MEAN of a set of observations in
which no weights are assigned to them) of the three groups is 6, (4 + 5 +
9)/3 = 6). If the three groups were of the same size (say, 10 each), there
would be no need to weight them as size is a constant and the mean is 6,
(40 + 50 + 90)/30 = 6). If the sizes of the groups differ, as they do here,
the weighted mean is higher at 8.33 than the unweighted mean of 6 be-
cause the largest group has the highest mean.

 Cramer & Howitt 2004; Bordens & Abbott 2011
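
A generic sketch of the calculation with hypothetical group means and sizes (not those of Table W.1): each mean is weighted by its group size, the products are summed, and the sum is divided by the total number of cases.

```python
# Hedged sketch with hypothetical numbers: weighted vs. unweighted mean.
import numpy as np

means = np.array([4.0, 6.0, 8.0])              # hypothetical group means
sizes = np.array([10, 20, 30])                 # hypothetical group sizes

unweighted = means.mean()                      # (4 + 6 + 8) / 3 = 6.0
weighted = np.average(means, weights=sizes)    # (40 + 120 + 240) / 60 = 6.67

print(unweighted, round(weighted, 2))
```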

Welch’s analysis of variance test
another term for WELCH’S TEST

Welch’s statistic
another term for WELCH’S TEST

Welch’s test
also Welch’s t-test, Welch’s statistic, unequal variance t-test, Welch’s analysis of variance test

a modified version of the INDEPENDENT SAMPLES t-TEST for which
EQUAL VARIANCEs are not assumed. The t-test assumes that the underly-
ing population variances of the two groups are equal (since the variances
are pooled as part of the test); if they are not, then the Welch’s test
should be used, since this provides a direct means to adjust for the ine-
quality. In other words, when homogeneity of variance is not present but
the other requirements of an independent samples t-test are fulfilled then
Welch’s test is used. It should be also used whenever the SAMPLE SIZE is
small, or you wish to be conservative in the inferences that you draw. If
you wish to use the two-sample t-test, the best approach is to calculate
the homogeneity of variance prior to any t-testing, and then decide
whether to use the two-sample t-test or the unequal variance t-test. When
the data are ordinal, none of these procedures as such are recommended,
as they have been specifically designed for interval- or ratio-level DE-
PENDENT VARIABLEs (see MANN-WHITNEY U TEST).

 Clark-Carter 2010; Boslaugh & Watters 2008; Lomax 2007
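
In SciPy, for example, the independent samples t-test becomes Welch's test when equal_var is set to False; the scores below are invented.

```python
# Hedged sketch: pooled-variance t-test vs. Welch's test (invented scores).
import numpy as np
from scipy import stats

group1 = np.array([78, 82, 75, 90, 85, 88, 79, 84])
group2 = np.array([70, 95, 60, 99, 55, 92, 65, 97])       # much larger variance

pooled = stats.ttest_ind(group1, group2)                   # assumes equal variances
welch = stats.ttest_ind(group1, group2, equal_var=False)   # Welch's test

print(round(pooled.statistic, 2), round(pooled.pvalue, 3))
print(round(welch.statistic, 2), round(welch.pvalue, 3))
```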

Welch’s t-test
another term for WELCH’S TEST

Wilcoxon-Mann-Whitney test
another term for MANN-WHITNEY U TEST

Wilcoxon matched-pairs signed-ranks test
also Wilcoxon signed-ranks test, signed-ranks test, Wilcoxon T test
a nonparametric alternative to the PAIRED-SAMPLES t-TEST which is used
with ordinal data. Wilcoxon matched-pairs signed-ranks test is used to
determine whether the two sets of data or scores obtained from the same
individuals (as in a pretest /posttest situation) are significantly different
from each other. It is used when you have one categorical INDEPENDENT
VARIABLE with two LEVELs (the same participants are measured two
times, e.g., Time 1, Time 2, or under two different conditions) and one
ordinal DEPENDENT VARIABLE. The differences between pairs of scores
are ranked in order of size, ignoring the sign or direction of those differ-
ences. The ranks of the differences with the same sign are added togeth-
er. If there are no differences between the scores of the two samples, the
sum of positive ranked differences should be similar to the sum of nega-
tive ranked difference. The bigger the differences between the positive
and negative ranked differences, the more likely the two sets of scores
differ significantly from each other.
The Wilcoxon signed-ranks test considers both the magnitude of the dif-
ference scores and their direction, which makes it more powerful (i.e.,
lower risk of TYPE II ERROR) than the SIGN TEST. The Wilcoxon
can also be used in situations involving a MATCHED SUBJECT DESIGN,
where subjects are matched on specific criteria.
see also CONSERVATIVE TEST, LIBERAL TEST

 Mackey & Gass 2005; Cohen et al. 2011; Cramer & Howitt 2004; Sheskin 2011; Pa-
gano 2009; Hatch & Lazaraton 1991
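
A minimal SciPy sketch with invented pretest/posttest scores from the same participants:

```python
# Hedged sketch: Wilcoxon matched-pairs signed-ranks test (invented scores).
from scipy import stats

pretest  = [12, 15, 11, 18, 14, 16, 13, 17, 10, 15]
posttest = [14, 18, 12, 20, 13, 19, 16, 21, 12, 17]

result = stats.wilcoxon(pretest, posttest)
print(result.statistic, round(result.pvalue, 3))
```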

Wilcoxon rank-sums test
another term for MANN-WHITNEY U TEST

Wilcoxon signed-ranks test
another term for WILCOXON MATCHED-PAIRS SIGNED-RANKS TEST

Wilcoxon test
another term for MANN-WHITNEY U TEST

Wilcoxon T test
another term for WILCOXON MATCHED-PAIRS SIGNED-RANKS TEST

Wilks’ Lambda
also lambda (λ), multivariate F
a test used in multivariate statistical procedures such as CANONICAL
CORRELATION, DISCRIMINANT FUNCTION ANALYSIS, and MULTIVARIATE
ANALYSIS OF VARIANCE to determine whether the MEANs of the groups
differ. Wilks’ lambda is a likelihood ratio statistic that tests the likeli-
hood of the data under the assumption of equal population mean vectors
for all groups against the likelihood under the assumption that population
mean vectors are identical to those of the sample mean vectors for the
different groups. It varies from 0 to 1. A lambda of 1 indicates that the
means of all the groups have the same value and so do not differ. Lamb-
das close to 0 signify that the means of the groups differ. It can be trans-
formed as a CHI-SQUARE or an F RATIO. It is the most widely used of
several such tests which include HOTELLING’S TRACE CRITERION, PIL-
LAI’S CRITERION and ROY’S GCR CRITERION.
When there are only two groups, the F ratios for Wilks’ lambda, Ho-
telling’s trace, Pillai’s criterion and Roy’s gcr criterion are the same.
When there are more than two groups, the F ratios for Wilks’ lambda,
Hotelling’s trace, and Pillai’s criterion may differ slightly. Pillai’s crite-
rion is said to be the most ROBUST when the assumption of the HOMOGE-
NEITY OF VARIANCE-COVARIANCE MATRIX is violated. In terms of avail-
ability Wilks’ lambda is the criterion of choice unless there is reason to
use Pillai's criterion.

 Cramer & Howitt 2004; Tabachnick & Fidell 2007; Hair et al. 2010
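
As an illustration only, statsmodels reports Wilks' lambda alongside the other multivariate criteria mentioned above when fitting a MANOVA; the data and column names below are invented.

```python
# Hedged sketch: Wilks' lambda from a one-way MANOVA (invented data).
import pandas as pd
from statsmodels.multivariate.manova import MANOVA

df = pd.DataFrame({
    "group":   ["A"] * 5 + ["B"] * 5 + ["C"] * 5,
    "reading": [55, 60, 58, 62, 57, 70, 72, 68, 75, 71, 64, 66, 63, 67, 65],
    "writing": [50, 52, 49, 55, 51, 66, 69, 64, 70, 68, 58, 60, 57, 61, 59],
})

fit = MANOVA.from_formula("reading + writing ~ group", data=df)
print(fit.mv_test())   # the output table includes Wilks' lambda and its F test
```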

willful bias
see BIAS

w index
see EFFECT SIZE

Winsorization
a strategy which involves replacing a fixed number of OUTLIERs (i.e.,
extreme scores) with the score that is closest to them in the tail of the
DISTRIBUTION in which they occur. The rationale underlying Winsoriza-
tion is that the outliers may provide some useful information concerning
the magnitude of scores in the distribution, but at the same time may un-
duly influence the results of the analysis unless some adjustment is
made. For example, in the distribution 0, 1, 18, 19, 23, 26, 26, 28, 33, 35,
98, 654 (which has a mean value of 80.08),
one can substitute a score of 18 for both the 0 and 1 (which are the two
lowest scores), and a score of 35 for the 98 and 654 (which are the two
highest scores). Thus, the Winsorized distribution will be: 18,18,18,19,
23, 26, 26, 28, 33, 35, 35, 35. The Winsorized mean of this distribution
will be 26.17. If the number of scores to be trimmed or Winsorized in the
right tail is the same as the number of scores to be trimmed or Winso-
rized in the left tail then the TRIMMING or Winsorization process is con-
sidered symmetric; otherwise the process is considered asymmetric.

 Sheskin 2011
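
The worked example above can be reproduced with a few lines of Python; the Winsorized variance is simply the sample variance of the Winsorized values.

```python
# Hedged sketch reproducing the example: the two lowest scores become 18 and
# the two highest become 35.
import numpy as np

scores = np.sort([0, 1, 18, 19, 23, 26, 26, 28, 33, 35, 98, 654]).astype(float)
print(round(scores.mean(), 2))           # 80.08

scores[:2] = scores[2]                   # replace 0 and 1 with 18
scores[-2:] = scores[-3]                 # replace 98 and 654 with 35

print(scores)                            # 18 18 18 19 23 26 26 28 33 35 35 35
print(round(scores.mean(), 2))           # Winsorized mean = 26.17
print(round(scores.var(ddof=1), 2))      # Winsorized variance
```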

Winsorized mean
see WINSORIZATION

Winsorized variance
the SAMPLE VARIANCE of the Winsorized values (see WINSORIZATION).
To compute the Winsorized variance, simply Winsorize the observations
as was done when computing the Winsorized mean. For example, when
computing a 20% Winsorized sample variance, more than 20% of the
observations must be changed in order to make the sample Winsorized
variance arbitrarily large.

 Wilcox 2003

within-groups ANCOVA
another term for REPEATED-MEASURES ANCOVA

within-groups ANOVA
another term for REPEATED-MEASURES ANOVA

within-groups design
another term for WITHIN-SUBJECTS DESIGN

within-groups factor
see BETWEEN-GROUPS FACTOR

within-groups independent variable
another term for WITHIN-GROUPS FACTOR

within-groups sum of squares
another term for SUM OF SQUARES WITHIN GROUPS

within-groups variability
another term for ERROR VARIANCE

within-groups variance
another term for ERROR VARIANCE

within-participants factor
another term for WITHIN-GROUPS FACTOR

within-subjects ANCOVA
another term for REPEATED-MEASURES ANCOVA

within-subjects ANOVA
another term for REPEATED-MEASURES ANOVA

within-subjects design

also within-groups design, repeated-measures design, related design,
related subjects design, correlated subjects design, correlated samples
design, correlated measures design, dependent samples design

an EXPERIMENTAL DESIGN in which the same participants in the research
receive or experience all of the LEVELs or conditions of the INDEPEND-
ENT VARIABLE (IV) (i.e., TREATMENT). Generally, there are two structur-
al forms that this repeated measurement can take: (a) it can mark the pas-
sage of time, or (b) it can be unrelated to the time of the measurement
but simply indicate the conditions under which participants were meas-
ured (e.g., Condition 1, Condition 2, Condition 3). Although these two
forms of within-subjects designs do not affect either the fundamental na-
ture of the design or the data analysis, they do imply different research
data collection procedures to create the measurement opportunities. A
within-subjects variable marking the passage of time is one in which the
first level of the variable is measured at one point in time, the next level
of the variable is measured at a later point in time, the third level of the
variable is assessed at a still later period of time, and so on. The most
commonly cited example of a time-related within-subjects design is ONE-
GROUP PRETEST-POSTTEST DESIGN.
A within-subjects variable does not have to be time related to measure
participants under all of the research conditions. For example, if the sub-
jects receive instruction in the three different teaching methods and
scores are recorded after each type of instruction, this is a within-groups
variable since the same subjects’ scores are in all levels of the variable.
The main advantage of a within-subjects design is that it eliminates the
problem of differences in the groups that can confound the findings in
BETWEEN-SUBJECTS DESIGNs. Another advantage of within-subjects de-
signs is that they can be conducted with fewer subjects. Further, since the
same individual is measured repeatedly, many EXTRANEOUS VARIABLEs
can be held constant. However, because each participant receives all lev-
els of the IV, the possibility arises that the order in which the levels are
received affects participants’ behavior. To guard against the possibility
of ORDER EFFECTs, researchers use COUNTERBALANCED DESIGN. Alter-
natively, a LATIN SQUARE DESIGN may be used to control for order ef-
fects, thus making it easier to reduce the effects of ERROR VARIANCE.
Within-subjects designs can theoretically contain any number of within-
subjects IVs. Thus, a one-way within-subjects design contains only a
single IV, a two-way within-subjects design contains two IVs, a three-
way within-subjects design contains three IVs, and so on. A within-
subjects design with more than one IV is called a within-subjects facto-
rial design. The most appropriate statistical analysis for a within-subjects
design is a repeated-measures ANOVA, which takes into account the fact
that the measures are related.
A within-subjects design is sometimes categorized as a RANDOMIZED-
BLOCKS DESIGN because within each block the same subject is matched
with himself by virtue of serving under all of the experimental condi-
tions. Within-subjects designs are still univariate studies in that there is
only one DV in the design.

 Cramer & Howitt 2004; Leary 2011; Sheskin 2011

within-subjects factor
see BETWEEN-GROUPS FACTOR

within-subjects factorial ANOVA
another term for REPEATED-MEASURES FACTORIAL ANOVA

within-subjects factorial design
see WITHIN-SUBJECTS DESIGN

within-subjects independent variable
another term for WITHIN-GROUPS FACTOR

within-subjects t-test
another term for PAIRED-SAMPLES t-TEST

within-subjects variability
another term for ERROR VARIANCE

within-subjects variance
another term for ERROR VARIANCE

WMW test
an abbreviation for MANN-WHITNEY U TEST

WWW survey
another term for INTERNET SURVEY

X

X axis
another term for HORIZONTAL AXIS

x̄
an abbreviation for MEAN

x intercept
see INTERCEPT

X variable
another term for INDEPENDENT VARIABLE

Y

Y axis
another term for VERTICAL AXIS

Yule’s Q
a measure of association for a 2 × 2 (two-by-two) CONTINGENCY TABLE.
Yule’s Q is actually a special case of GOODMAN-KRUSKAL’S GAMMA
which can be used for both ordered and unordered tables.

 Sahai & Khurshid 2001
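
The entry gives no formula; for reference, with cell frequencies a, b, c, d in a 2 × 2 table, Yule's Q = (ad - bc)/(ad + bc). A tiny sketch with invented counts:

```python
# Hedged sketch: Yule's Q for a 2 x 2 table with cells a, b / c, d (invented counts).
def yules_q(a, b, c, d):
    return (a * d - b * c) / (a * d + b * c)

print(round(yules_q(40, 10, 20, 30), 2))   # (1200 - 200) / (1200 + 200) = 0.71
```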

Y variable
another term for DEPENDENT VARIABLE

Z

z distribution
a DISTRIBUTION produced by transforming all RAW SCOREs in the data
into Z SCOREs. By envisioning such a z distribution, you can see how z
scores form a standard way to communicate relative standing. The z
score of 0 indicates that the raw score equals the MEAN. A ‘+’ indicates
that the z score (and raw score) is above and graphed to the right of the
mean. Positive z scores become increasingly larger as we proceed farther
to the right. Larger positive z scores (and their corresponding raw scores)
occur less frequently. Conversely, a ‘-’ indicates that the z score (and raw
score) is below and graphed to the left of the mean. Negative z scores be-
come increasingly larger as we proceed farther to the left. Larger nega-
tive z scores (and their corresponding raw scores) occur less frequently.
However, most of the z scores are between +3 and -3.
Three important characteristics of any z distribution are the following:

1) A z distribution has the same shape as the raw score distribution. On-
ly when the underlying raw score distribution is normal will its z dis-
tribution be normal.

2) The mean of any z distribution is 0. Whatever the mean of the raw
scores is, it transforms into a z score of 0.

3) The STANDARD DEVIATION of any z distribution is 1. Whether the
standard deviation in the raw scores is 10 or 100, it is still one stand-
ard deviation, which transforms into an amount in z scores of 1.

 Heiman 2011
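
A minimal numeric check, with invented raw scores, that the transformed scores have a mean of 0 and a standard deviation of 1:

```python
# Hedged sketch: a z distribution has mean 0 and standard deviation 1.
import numpy as np

raw = np.array([55, 60, 65, 70, 75, 80, 85])   # invented raw scores
z = (raw - raw.mean()) / raw.std()

print(np.round(z, 2))                          # -1.5 -1.0 -0.5 0.0 0.5 1.0 1.5
print(z.mean(), z.std())                       # 0.0 and 1.0
```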

zero hypothesis
another term for NULL HYPOTHESIS

z score

also z value, z statistics

a type of STANDARD SCORE that indicates how many STANDARD DEVIA-
TION units a given score is above or below the MEAN for that group. The
z scores create a scale with a mean of zero and a standard deviation of
one. The shape of the z score distribution is the same as that of the RAW
SCOREs used to calculate the z scores. The theoretical range of the z
scores is ±∞ (i.e., plus/minus infinity). Since the area above a z score of
+3 or below a z score of -3 includes only .13 percent of the cases, for
practical purposes most people only use the scale of -3 to +3. To convert
a raw score to a z score, the raw score as well as the group mean and
standard deviation are used. The conversion formula is:

z score = (score ‐ mean)/standard deviation 

Any raw score can be converted to a z score if the mean of a distribution
and the standard deviation are known. If Bill got a score of 75 on a test
with a mean of 65 and a standard deviation of 5, his z score would be as
follows:

z score = (75 ‐ 65)/5 = 10/5 = +2 

The z score is used to describe a particular participant’s score relative to
the rest of the data. A participant’s z score indicates how far from the
mean in terms of standard deviations the participant’s score varies. With
z scores we can easily determine the underlying raw score’s location in a
distribution, its relative and simple frequency, and its PERCENTILE. All of
this helps us to know whether the individual’s raw score was relatively
good, bad, or in-between. As shown in Figure Z.1, a z score always has
two components: (1) either a positive or negative sign which indicates
whether the raw score is above or below the mean, and (2) the absolute
value of the z score which indicates how far the score lies from the mean
when measured in standard deviations.

[Normal curve marked in z scores (-3 to +3) and standard deviations (-3 SD to +3 SD around the mean); areas under the curve: 34.13% between the mean and 1 SD on either side, 13.59% between 1 and 2 SD, 2.15% between 2 and 3 SD, and .13% beyond 3 SD.]

Figure Z.1. Scores Associated with the Normal Curve

For example, if we find that a participant has a z score of -1, we know
that his/her score is 1 standard deviation below the mean. By referring to
Figure Z.1, we can see that only about 16% of the other participants
scored lower than this person. Similarly, a z score of +2.9 indicates a
score nearly 3 standard deviations above the mean—one that is in the
uppermost ranges of the distribution. Sometimes we know a z score and
want to find the corresponding raw score. We multiply the z score
by the standard deviation and then add the mean.

As stated above, z scores may be either positive or negative numbers. (If
you add all the z scores in a distribution, the answer will be zero). In
addition, they often contain decimal points (a z score might be 1.8
standard deviations from the mean rather than just 1 or 2). This makes
for error in reporting. T SCOREs seem easier to interpret since they are
always positive numbers and contain no fractions.
see NORMAL DISTRIBUTION

 Urdan 2010; Leary 2011; Ravid 2011; Heiman 2011; Hatch & Lazaraton 1991
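
The two conversions described above, using the figures from the entry (mean 65, standard deviation 5, raw score 75):

```python
# Hedged sketch: raw score to z score and back, using the entry's own numbers.
mean, sd = 65, 5

z = (75 - mean) / sd        # (75 - 65) / 5 = +2.0
raw = z * sd + mean         # 2.0 * 5 + 65 = 75.0

print(z, raw)
```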

z statistics
another term for Z SCORE

z test for two dependent samples
a statistical test used when a researcher wants to compare the MEANs of
two DEPENDENT SAMPLES, and happens to know the VARIANCEs of the
two underlying POPULATIONs. In such a case, the z test for two depend-
ent samples should be employed to evaluate the data instead of the t-
TEST for two dependent samples. The z test for two dependent samples
assumes that the two samples are randomly selected from populations
that have NORMAL DISTRIBUTIONs. The effect of violation of the NOR-
MALITY assumption on the test statistic decreases as the size of the sam-
ple employed in an experiment increases. The HOMOGENEITY OF VARI-
ANCE assumption is not an assumption of the z test for two dependent
samples.
see also ONE-SAMPLE Z TEST, Z TEST FOR TWO INDEPENDENT PROPORTIONS
 Sheskin 2011

z test for two independent proportions
an alternative large sample procedure for evaluating a 2 × 2 (two-by-two)
CONTINGENCY TABLE. In fact, the z test for two independent proportions
yields a result that is equivalent to that obtained with the CHI-SQUARE
TEST OF INDEPENDENCE. If both the z test for two independent propor-
tions (which is based on the NORMAL DISTRIBUTION) and chi-square test
of independence are applied to the same set of data, the square of the z
value obtained for the former test will equal the chi-square value ob-
tained for the latter test.
see also ONE-SAMPLE Z TEST, Z TEST FOR TWO DEPENDENT SAMPLES

 Sheskin 2011
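
As an illustration only (invented counts, using statsmodels and SciPy), the squared z value from the two-proportion test equals the uncorrected chi-square value for the same 2 × 2 table:

```python
# Hedged sketch: z test for two independent proportions vs. the chi-square test
# of independence on the same 2 x 2 table (invented counts).
import numpy as np
from scipy.stats import chi2_contingency
from statsmodels.stats.proportion import proportions_ztest

successes = np.array([30, 18])                  # e.g., passes in group 1 and group 2
totals = np.array([50, 50])

z, p_z = proportions_ztest(successes, totals)
chi2, p_chi, _, _ = chi2_contingency([[30, 20], [18, 32]], correction=False)

print(round(z ** 2, 3), round(chi2, 3))         # equal: z squared = chi-square
print(round(p_z, 3), round(p_chi, 3))
```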

z value
another term for Z SCORE



Bibliography

Adair, G. (1984). The Hawthorne effect: A reconsideration of the
methodological artifact. Journal of Applied Psychology, 69 (2), 334-345.

Airasian, P. W. (2001). Classroom assessment: Concepts and applications
(4th ed.). New York: McGraw-Hill.

Alastair, P. (2001). Critical applied linguistics: A critical introduction.
Lawrence Erlbaum Associates.

Alaszewski, A. (2006). Using diaries for social research. London: Sage.
Allison, D. (2002). Approaching English language research. Singapore:

Singapore University Press.
Allwright, D. (1988). Observation in the language classroom. New York:

Longman.
Allwright, D. (2003). Exploratory practice: Rethinking practitioner research

in language teaching. Language Teaching Research, 7, 113–141.
Allwright, D. (2005). Developing principles for practitioner research: The

case for exploratory practice. The Modern Language Journal, 89(3),
353–366.
Allwright, D., & Bailey, K. M. (1991). Focus on the language classroom:
An introduction to classroom research for language researchers.
Cambridge, UK: Cambridge University Press.
Allwright, D., & Hanks, J. (2009). The developing language learner.
Basingstoke: Palgrave Macmillan.
Altrichter, H., Feldman, A., Posch, P., & Somekh, B. (2007). Teachers
investigate their work. London: Routledge.
American Educational Research Association. (2004). Encyclopedia of
educational research (7th ed.). New York: Macmillan.
American Psychological Association (APA). (2010). Publication manual of
the American Psychological Association (6th ed.). Washington, DC:
Author.
Anderson, G. & Arsenault, N. (2001). Fundamentals of educational
research. London: Routledge.
Andrews, M., Squire, C., & Tamboukou, M. (Eds.). (2008). Doing narrative
research in the social sciences. London: Sage.
Anfara, V. A., & Mertz, N. T. (2006). Theoretical frameworks in qualitative
research. Thousand Oaks, CA: Sage.
Aron, A., & Aron, E. (1999). Statistics for psychology. Upper Saddle River,
NJ: Prentice Hall.
Aronson, E., Ellsworth, P., Carlsmith, J., & Gonzales, M. (1990). Methods of
research in social psychology. New York: McGraw-Hill.

Ary, D., Jacobs, L. C., & Sorensen, C. K. (2010). Introduction to research in
education (8th ed.). Belmont, CA: Wadsworth.

Atkinson, R. (1998). The life story interview. Thousand Oaks, CA: Sage.
Baayen, R. H. (2008). Analyzing linguistic data: A practical introduction to statistics using R. Cambridge, UK: Cambridge University Press.
Babbie, E. (2011). The basics of social research (5th ed.). Belmont, CA: Wadsworth.
Bachman, L. F. (1990). Fundamental considerations in language testing. Oxford, UK: Oxford University Press.
Bachman, L. F., & Palmer, A. S. (1996). Language testing in practice: Designing and developing useful language tests. Oxford, UK: Oxford University Press.
Bachman, L. F. (2004). Statistical analyses for language assessment.
Cambridge, UK: Cambridge University Press.
Bailey, K. M. (1991). Diary studies of classroom language learning: The
doubting game and the believing game. In E. Sadtono (Ed.), Language
acquisition and the second/foreign language classroom (pp. 60-102).
RELC Anthology Series 28, Singapore: RELC.
Bailey, K. M. (1998). Learning about language assessment: Dilemmas,
decisions, and directions. Boston, MA: Heinle & Heinle.
Bailey, K. M. (2001). Observation. In R. Carter & D. Nunan (Eds.), The
Cambridge guide to teaching English to speakers of other languages (pp.
114-119). Cambridge, UK: Cambridge University Press.
Bailey, K. M., & Ochsner, R. (1983). A methodological review of the diary
studies: Windmill tilting or social science? In K. M. Bailey, M. H. Long
& S. Peck (Eds.), Second language acquisition studies (pp. 188-198).
Rowley, Mass.: Newbury House.
Bakeman, R. & Robinson, B. F. (2005). Understanding statistics in the
behavioral sciences. Mahwah, NJ: Lawrence Erlbaum Associates.
Baker, C. (1997). Survey methods in researching language and education. In
N. H. Hornberger & D. Corson (Eds.), Research methods in language
and education (pp. 35-46). Encyclopedia of Language and Education.
(Vol. 8). Dordrecht: Kluwer.
Baker, P. (2006). Using corpora in discourse analysis. London: Continuum.
Bartels, N. (2005). Applied linguistics and language teacher education.
New York: Springer.
Bassey, M. (1999). Case study research in educational settings.
Buckingham, UK: Open University Press.
Bell, J. S. (2002). Narrative inquiry: More than just telling stories. TESOL
Quarterly, 36, 207-218.
Benson, P. (2004). (Auto)biography and learner diversity. In P. Benson &
D. Nunan (Eds.), Learners’ stories: Difference and diversity in language
learning (pp. 4-21). Cambridge, UK: Cambridge University Press.


Benson, P., Chik, A., Gao, X., Huang, J., & Wang, W. (2009). Qualitative
research in language teaching and learning journals, 1997–2006. The
Modern Language Journal, 93(1), 79–90.

Benton, T., & Craib, I. (2001). Philosophy of social science: The
philosophical foundations of social thought. London: Palgrave.

Beretta, A. (1991). Theory construction in SLA: Complementarity and
opposition. Studies in Second Language Acquisition, 13, 493-511.

Berg, B. L. (2009). Qualitative research methods for the social sciences (7th
ed.). Boston: Allyn & Bacon.

Bernard, H. R. (1995). Research methods in anthropology: Qualitative and
quantitative approaches. Walnut Creek, CA: AltaMira.

Berns, P. M. (2010). Concise encyclopedia of applied linguistics. Elsevier
Ltd.

Bertaux, D. (Ed.). (1981). Biography and society: The life history approach
in the social sciences. Beverley Hills, CA: Sage.

Best, J. W. & Kahn, J. V. (2006). Research in education (10th ed.). New
York: Pearson Education Inc.

Bhatia, V. K., Flowerdew, J. & Jones, R. H. (Eds.). (2008). Advances in
discourse studies. London: Routledge.

Biber, D., & Conrad, S. (2001). Quantitative corpus-based research: Much
more than bean counting. TESOL Quarterly, 35 (2), 331-336.

Biber, D., Conrad, S., & Reppen, R. (1998). Corpus linguistics:
Investigating language structure and use. Cambridge, UK: Cambridge
University Press.

Bickman, L., & Rog, D. J. (Eds.). (1998). Handbook of applied social
research methods. Thousand Oaks, CA: Sage.

Black, T. R. (1999). Doing quantitative research in the social sciences. London:
Sage.

Bliss, J., Monk, M. & Ogborn, J. (1983). Qualitative data analysis for
educational research. London: Croom Helm.

Blaxter, L., Hughes, C., & Tight, M. (1996). How to research. Buckingham, UK: Open University Press.
Block, D. (1995). Social constraints on interviews. Prospect, 10 (3), 35-48.
Block, D. (1996). Not so fast: Some thoughts on theory culling, relativism, accepted findings and the heart and soul of SLA. Applied Linguistics, 17 (1), 63-83.
Block, D. (2000). Problematizing interview data: Voices in the mind’s
machine? TESOL Quarterly, 34, 757-763.
Block, D. (2003). The social turn in second language acquisition.
Edinburgh: Edinburgh University Press.
Bloor, M. & Wood, F. (2006). Keywords in qualitative methods. Thousand
Oaks, CA: Sage.


Blot, R. K. (1991). The role of hypothesis testing in qualitative research: A
second researcher comments. TESOL Quarterly, 25 (1), 202-205.

Bogdan, R. C., & Biklen, S. (2007). Qualitative research for education: An
introduction to theories and methods. Boston: Allyn & Bacon.

Bordens, K. S. & Abbott, B. B. (2011). Research design and methods: A
process approach (8th ed.). New York: McGraw-Hill.

Borenstein, M., Hedges, L. V., Higgins, J. P. T., & Rothstein, H. R. (2009).
Introduction to meta-analysis. UK: John Wiley & Sons.

Borg, I., & Groenen, P. J. F. (2005). Modern multidimensional scaling: Theory
and applications (2nd ed.). New York: Springer.

Borg, S. (1998). Teachers’ pedagogical systems and grammar teaching: A
qualitative study. TESOL Quarterly, 32(1), 9–38.

Borg, S. (2006a). Classroom research in English language teaching in
Oman. Muscat: Sultanate of Oman, Ministry of Education.

Borg, S. (Ed.) (2006b). Language teacher research in Europe. Alexandria,
VA: TESOL.

Borg, W. R. (1989). Educational research: an introduction. New York:
Addison Wesley Longman.

Boslaugh, S. & Watters, P. A. (2008). Statistics in a Nutshell. CA: O’Reilly
Media.

Bowles, H., & Seedhouse, P. (2007a). Conversation analysis and LSP.
Berlin: Peter Lang.

Box, G. E. P., Hunter, J. S. & Hunter, W. G. (2005). Statistics for
experimenters: Design, innovation, and discovery (2nd ed.). NJ: John
Wiley & Sons.

Brace, N., Kemp, R., & Snelgar, R. (2003). SPSS for psychologists: A guide
to data analysis using SPSS for Windows. Mahwah, NJ: Lawrence
Erlbaum Associates.

Bracht, G. H., & Glass, G. V. (1968). The external validity of experiments.
American Educational Research Journal, 5, 437-474.

Brewerton, P. & Millward, L. (2001). Organizational research methods.
Thousand Oaks, CA: Sage.

Brown, D. B. (2007). Principles of language learning and teaching (5th
ed.). New York: Pearson Education.

Brown, D. B. (2010). Language assessment: Principles and classroom
practices (2nd ed.). New York: Pearson Education.

Brown, J. D. (1988). Understanding research in second language learning. Cambridge, UK: Cambridge University Press.
Brown, J. D. (1990). The use of multiple t-tests in language research. TESOL Quarterly, 24 (4), 770-773.

Brown, J. D. (1991). Statistics as a foreign language—Part 1: What to look
for in reading statistical language studies. TESOL Quarterly, 25 (4), 569-
585.


Brown, J. D. (1992). Statistics as a foreign language—Part 2: More things to
consider in reading statistical language studies. TESOL Quarterly, 26 (4),
629-664.

Brown, J. D. (2001). Using surveys in language programs. Cambridge, UK:
Cambridge University Press.

Brown, J. D., & Rodgers, T. S. (2002). Doing second language research.
Oxford, UK: Oxford University Press.

Brown, J. D. (2004a). Research methods for applied linguistics: Scope,
characteristics, and standards. In A. Davis & C. Elder (Eds.), The
handbook of applied linguistics. Oxford, UK: Blackwell.

Brown, J. D. (2004b). Resources on quantitative/statistical research for
applied linguists. Second Language Research, 20 (4), 372-393.

Brown, J. D. (2005). Testing in language programs: A comprehensive guide
to English Language Assessment. New York: McGraw Hill.

Brown, J. D. (2009). Open-Response Items in Questionnaires. In Juanita
Heigham & Robert A. Croker (Eds.), Qualitative research in applied
linguistics (pp. 200-219). New York: Palgrave Macmillan.

Bruner, J. (1991). The narrative construction of reality. Critical Inquiry, 8,
1-21.

Bryant, A., & Charmaz, K. (Eds.). (2007). The Sage handbook of grounded
theory. London: Sage.

Bryman, A. & Cramer, D. (1994). Quantitative data analysis for social
scientists. London: Routledge.

Bryant, M. T. (2004). The portable dissertation advisor. Thousand Oaks,
CA: Corwin.

Buchanan, D., & Bryman, A. (2009). The SAGE handbook of organizational
research methods. London: Sage.

Burns, A. (2010). Doing action research in English language teaching: A
guide for practitioners. New York: Routledge.

Butler, C. (1985). Statistics in linguistics. Oxford, UK: Basil Blackwell.
Button, G. (Ed.). (1991). Ethnomethodology and the human sciences. New

York: Cambridge University Press.
Byram, M. (Ed.). (2000). Routledge encyclopedia of language teaching and

learning. London: Routledge.
Cameron, D. (2000). Difficult subjects. Critical Quarterly, 42 (4), 89-94.
Cameron, D. (2001). Working with spoken discourse. Thousand Oaks, CA:

Sage.
Cameron, D., Frazer, E., Harvey, P., Rampton, M. B. H., & Richardson, K.

(Eds.). (1992). Researching language: Issues of power and method.
London: Routledge.
Campbell, D., & J. Stanley. (1963). Experimental and quasi-experimental
designs for research. Chicago: Rand McNally.


Campbell, D. T., Stanley, J. C., & Gage, N. L. (1981). Experimental and
quasi-experimental designs for research. Boston: Houghton Mifflin.

Canagarajah, A. S. (1996). From critical research practice to critical
research reporting. TESOL Quarterly, 30 (3), 321-330.

Carmines, E., & Zeller, R. (1979). Reliability and validity assessment.
Beverly Hills, CA: Sage.

Carspecken, P. F., & Apple, M. (1992). Critical qualitative research: Theory, methodology, and practice. In M. LeCompte, W. L. Millroy & J. Preissle (Eds.), The handbook of qualitative research in education. London: Academic Press, 507-53.

Carter, R., & Nunan. D. (Eds.). (2001). The Cambridge guide to teaching
English to speakers of other languages. Cambridge, UK: Cambridge
University Press.

Catford, J. C. (1998). Language Learning and applied linguistics: A
historical sketch. Language Learning, 48 (4), 465-496.

Cazden, C. B. (2001). Classroom discourse: The language of teaching and
learning. Portsmouth, NH: Heinemann.

Cazden, C. B., & Beck, S. W. (2003). Classroom discourse. In A. C. Graesser, M. A. Gernsbacher, & S. Goldman (Eds.), Handbook of discourse processes (pp. 165-198). Mahwah, NJ: Lawrence Erlbaum Associates.

Celce-Murcia, M., & Olshtain, E. (2000). Discourse and context in
language teaching: A guide for language teachers. Cambridge, UK:
Cambridge University Press.

Chamberlayne, P., Bornat, J., & Wengraf, T. (Eds.). (2000). The turn to
biographical methods in social science: Comparative issues and
examples. London: Routledge.

Charmaz, K. (2006). Constructing grounded theory: A practical guide
through qualitative analysis. London: Sage.

Chaudron, C. (1988). Second language classrooms: Research on teaching
and learning. Cambridge, UK: Cambridge University Press.

Cho, J., & Trent, A. (2006). Validity in qualitative research revisited.
Qualitative Research, 6, 319-340.


Christ, T. (2007). A recursive approach to mixed methods research in a
longitudinal study of postsecondary education disability support services.
Journal of Mixed Methods Research, 1(3), 226-241.

Christie, F. (2002). Classroom discourse analysis: A functional perspective.
London: Continuum.

Clandinin, D. J. (Ed.). (2006). Handbook of narrative inquiry: Mapping a
methodology. Thousand Oaks, CA: Sage.

Clark-Carter, D. (2010). Quantitative psychological research: The complete
student’s companion (3rd ed.). New York: Psychology Press.


Coaley, K. (2010). An introduction to psychological assessment and
psychometrics. London: Sage.

Cochran, W. (1981). Statistical analysis in psychology and education. New York: McGraw Hill.

Coffey, A., & Atkinson, P. (1996). Making sense of qualitative data:
Complementary research strategies. Thousand Oaks, CA: Sage.

Cohen, A. D. & Macaro, E. (2010). Research methods in second language
acquisition. In E. Macaro (Ed.), Continuum companion to second
language acquisition (pp. 107-133). London: Continuum.

Cohen, J. (1988). Statistical power and analysis for the behavioral sciences
(2nd ed.). Hillsdale, NJ: Lawrence Erlbaum Associates.

Cohen, J., Cohen, P., West, S. G., & Aiken, L. S. (2003). Applied multiple
regression/correlation analysis for the behavioral sciences. Mahwah,
NJ: Lawrence Erlbaum Associates.

Cohen, L., Manion, L., & Morrison, K. (2011). Research methods in
education (7th ed.). London: Routledge.

Cohen, R. J., & Swerdlik, M. E. (2010). Psychological testing and
assessment: An introduction to tests and measures (7th ed.). Boston:
McGraw-Hill.

Connolly, P. (2007). Quantitative data analysis in education: A critical
introduction using SPSS. London: Routledge.

Connor, U. (1994). Text Analysis. TESOL Quarterly, 28 (4), 682-684.
Conover, W. J. (1980). Practical nonparametric statistics (2nd ed.). New

York: Wiley.
Conrad, S. (1999). The importance of corpus-based research for language

teachers. System, 27, 1-18.
Cook, G. (2003). Applied Linguistics. Oxford, UK: Oxford University

Press.
Cook, T. D., & Campbell, D. T. (1979). Quasi-experimentation: Design and analysis issues for field settings. Chicago: Rand McNally.
Coolican, H. (2009). Research methods and statistics in psychology (5th

ed.). London: Hodder Education.
Cooper, H. M. (1998). Synthesizing research: A guide for literature reviews

(3rd ed.). Thousand Oaks, CA: Sage.
Cooper, H. M. (2006). Research questions and research designs. In P.

Alexander & P. Winne (Eds.), Handbook of educational psychology (2nd
ed.). Mahwah, NJ: Lawrence Erlbaum Associates.
Corbin, J., & Strauss, A. (2008). Basics of qualitative research (3rd ed.).
Thousand Oaks, CA: Sage.
Corson, D. (1997). Critical realism: An emancipatory philosophy for
applied linguistics? Applied Linguistics, 18, 166-168.
Corson, D. (Ed.). (1997). Encyclopedia of language and education (vols. 1-
8). Amsterdam: Kluwer.


Cortazzi, M. (1993). Narrative analysis. London: Falmer Press.
Coulon, A. (2000). Ethnomethodology. London: Sage.
Cox, T. F. & Cox. M.A.A. (2001). Multidimensional scaling. Boca Raton,

FL: Chapman & Hall/CRC.
Cramer, D. & Howitt, D. (2004). The SAGE dictionary of statistics: A

practical resource for students in the social sciences. Thousand Oaks,
CA: Sage.
Crano, W. D. & M. B. Brewer (2002). Principles and methods of social
research. Mahwah, NJ: Lawrence Erlbaum Associates.
Creswell, J. W. (2007). Qualitative inquiry and research design: Choosing
among five approaches (2nd ed.). Thousand Oaks, CA: Sage.
Creswell, J. W. (2008). Educational research: Planning, conducting, and
evaluating quantitative and qualitative approaches to research (3rd ed.).
Upper Saddle River, NJ: Merrill/Pearson Education.
Creswell, J. W. (2009). Research design: Qualitative, quantitative, and
mixed method approaches. (3rd ed.). Thousand Oaks, CA: Sage.
Creswell, J. W., & Plano Clark, V. L. (2007). Designing and conducting
mixed methods research. Thousand Oaks, CA: Sage.
Crookes, G. (1993). Action research for second language instructors: Going
beyond instructor research. Applied Linguistics, 14 (2), 130-144.
Cumming, A. (Ed.). (1994). Alternatives in TESOL research: Descriptive,
interpretive, and ideological orientations. TESOL Quarterly, 28 (4), 673-
703.
Cummins, J., & Davison, C. (Eds.). (2007). The international handbook of
English language teaching (Vol. 1 & 2). Norwell, MA: Springer.
Dalgaard, P. (2002). Introductory statistics with R. New York: Springer-
Verlag.
Dana, N. F., & Yendol-Silva, D. (2003). The reflective educator’s guide to
classroom research: Learning to teach and teaching to learn through
practitioner inquiry. Thousand Oaks, CA: Corwin.
Davies, A., & Elder, C. (Eds.). (2004). The handbook of applied linguistics.
Oxford, UK: Blackwell.
Davis, K. A. (1992). Validity and reliability in qualitative research: Another
researcher comments. TESOL Quarterly, 26 (3), 605-608.
Davis, K. A. (1995). Qualitative theory and methods in applied linguistics
research. TESOL Quarterly, 29, 427-453.
Davis, K. A., & Henze, R. C. (1998). Applying ethnographic perspectives to
issues in cross-cultural pragmatics. Journal of Pragmatics, 30, 399-419.
Davis, S. F. (2003). Handbook of research methods in experimental
psychology. Oxford, UK: Blackwell.
Dawson, C. (2007). Practical research methods: A user-friendly guide to
mastering research techniques and projects. UK: HowtoBooks.


deMarrais, K. & Lapan, S.D. (Eds.). (2004). Foundations for research:
Methods of inquiry in education and the social sciences. Mahwah, NJ:
Lawrence Erlbaum Associates.

Denzin, N. K. (1970a). Strategies of multiple triangulation. In N. Denzin
(Ed.), The research act in sociology: A theoretical introduction to
sociological method. London: Butterworth, 297-313.

Denzin, N. K. (1970b). The research act in sociology: A theoretical
introduction to sociological methods. London: Butterworth.

Denzin, N. K. (1989a). Interpretive biography. Newbury Park, CA: Sage.
Denzin, N. K. (1989b). The research act (3rd ed.). Englewood Cliffs, NJ:

Prentice Hall.
Denzin, N. K. (1997). Triangulation in educational research. In J. P. Keeves

(Ed.), Educational research, methodology and measurement: An
international handbook (2nd ed.). Oxford, UK: Elsevier Science, 318-22.
Denzin, N. K. (1999). Biographical research methods. In J. P. Keeves & G.
Lakomski (Eds.), Issues in educational research. Oxford, UK: Elsevier
Science, 92-102.
Denzin, N. K., & Lincoln, Y. S. (2008). Strategies of qualitative inquiry
(3rd ed.). Thousand Oaks, CA: Sage.
Denzin, N. K., & Lincoln, Y. S. (Eds.). (2011). The Sage handbook of
qualitative research (4th ed.). Thousand Oaks, CA: Sage.
DeVaus, D. (2002). Conducting surveys using the Internet. Thousand Oaks,
CA: Sage.
Dey, I. (1993). Grounding grounded theory. San Diego, CA: Academic
Press.
Deyle, D. L., Hess, G. & LeCompte, M. L. (1992). Approaching ethical
issues for qualitative researchers in education. In M. LeCompte, W. L.
Millroy & J. Preissle (Eds.), The handbook of qualitative research in
education. London: Academic Press, 597-642.
Dickinson Gibbons, J. (1994). Non-parametric statistics: An introduction. Thousand Oaks, CA: Sage.
Dillman, D. A. (2000). Mail and Internet surveys: The tailored design
method (2nd ed.). New York: Wiley.
Dobbert, M. L. & Kurth-Schai, R. (1992). Systematic ethnography: toward
an evolutionary science of education and culture. In M. LeCompte, W. L.
Millroy and J. Preissle (Eds.), The handbook of qualitative research in
education. London: Academic Press, 93-160.
Dochartaigh, N. O. (2002). The internet research handbook. London: Sage.
Dooley, D. (2001). Social research methods (4th ed.). Englewood Cliffs,
NJ: Prentice Hall.
Dörnyei, Z. (2001). Teaching and researching motivation. Harlow:
Longman.


Dörnyei, Z. (2003). Questionnaires in second language research:
Constructing, administering, and processing. Mahwah, NJ: Lawrence
Erlbaum Associates.

Dörnyei, Z. (2007). Research in applied linguistics: Quantitative,
qualitative, and mixed methodologies. Oxford, UK: Oxford University
Press.

Dörnyei, Z. & Taguchi, T. (2010). Questionnaires in second language
research: Constructing, administering, and processing. London:
Routledge.

Doughty, C. J., & Long, M. H. (Eds.). (2003). The handbook of second
language acquisition. Oxford, UK: Blackwell.

Duff, P. A. (1995). An ethnography of communication in immersion
classrooms in Hungary. TESOL Quarterly, 29 (3), 505-536.

Duff, P. A. (2002). Research approaches in applied linguistics. In R. Kaplan
(Ed.), The Oxford handbook of applied linguistics (pp. 13-23). Oxford,
UK: Oxford University Press.

Duff, P. A., & Bailey, K. M. (Eds.). (2001). Identifying research priorities:
Themes and directions for the TESOL International Research
Foundation. TESOL Quarterly, 35 (4), 595-616.

Edge, J. (2001). Action research. Alexandria, VA: TESOL.
Edge, J., & Richards, K. (1998). May I see your warrant, please? Justifying

outcomes in qualitative research. Applied Linguistics, 19 (3), 334-356.
Egbert, J. L., & Petrie, G. M. (Eds.). (2005). CALL research perspectives.

Mahwah, NJ: Lawrence Erlbaum Associates.
Eisenhart, M. A. & Howe, K. R. (1992). Validity in educational research. In

M. D. LeCompte, W. L. Millroy & J. Preissle (Eds.), The handbook of
qualitative studies in education. New York: Academic Press, 643-680.
Eisner, E. (1998). The enlightened eye: Qualitative inquiry and the
enhancement of educational practice. New York: Macmillan.
Ericsson, K. A., & Simon, H. A. (1993). Protocol analysis. Cambridge,
MA: The MIT Press.
Evans, A. N., & Rooney, B. F. (2008). Methods in psychological research.
Thousand Oaks, CA: Sage.
Everitt, B. S (2001). Statistics for psychologists: an intermediate course.
Mahwah, NJ: Lawrence Erlbaum Associates.
Everitt, B. S., & Dunn, G. (2001). Applied multivariate data analysis (2nd
ed.). New York: Hodder Arnold.
Everitt, B. S. & Howell, D. C. (Eds.). (2005). Encyclopedia of statistics in
behavioral science. NJ: John Wiley & Sons.
Everitt, B. S. & Skrondal, A. (2010). The Cambridge dictionary of statistics.
(4th ed.). Cambridge, UK: Cambridge University Press.
Ezzy, D. (2002). Qualitative analysis: Practice and innovation. London:
Routledge.


Fairclough, N. (1992). Discourse and social change. Cambridge: Polity
Press.

Fairclough, N. (1995). Critical discourse analysis: The critical study of
language. New York: Longman.

Faltis, C. (1997). Case study methods in researching language and
education. In N. H. Hornberger & D. Corson (Eds.), Research methods in
language and education (pp. 145-153). Encyclopedia of Language and
Education. (Vol. 8). Dordrecht: Kluwer.

Farhady, H. (1995). Research methods in applied linguistics. Tehran:
Payame Noor University.

Farhady, H., Jafarpur, A., & Birjandi, P. (1995). Testing language skills:
From theory to practice (2nd ed.). Tehran: SAMT Publication.

Fernandez-Ballesteros, R. (2003). Encyclopedia of psychological
assessment. (Vol. 1 & 2). Thousand Oaks, CA: Sage.

Fetterman, D. (1989). Ethnography step by step. Newbury Park, CA: Sage.
Field, A. (2005). Discovering statistics using SPSS (2nd ed.). London: Sage.
Firth, A., & Wagner, J. (1997). On discourse, communication and some

fundamental concepts in SLA research. Modern Language Journal, 81,
285-300.
Fischer, C. T. (Ed.). (2006). Qualitative research methods for psychologists:
Introduction through empirical studies. Boston: Academic Press.
Flick, U. (2004). Design and process in qualitative research. In U. Flick, E.
von Kardoff & I. Steinke (Eds.), A companion to qualitative research
(146-52). London: Sage.
Flick, U. (2006). An introduction to qualitative research (3rd ed.). Thousand
Oaks, CA: Sage.
Flick, U., von Kardoff, E., & Steinke, I. (Eds.). (2004). A companion to
qualitative research. Translated by B. Jenner. London: Sage.
Flood, J., Lapp, D., Squire, J. R., & Jensen, J. M. (Eds.). (2005). Methods of
research on teaching the English language arts: the methodology
chapters from the Handbook of research on teaching the English
language arts (2nd ed.). Mahwah, NJ: Lawrence Erlbaum Associates.
Floud, R. (1979). An introduction to quantitative methods for historians
(2nd ed.). London: Methuen.
Fosnot, C. T. (Ed.). (2005). Constructivism: Theory, perspectives, and
practice (2nd ed.). New York: Teachers College Press.
Fotos, S., Browne, C. (Eds.). (2004). New perspectives on CALL for second
language classrooms. Mahwah, NJ: Lawrence Erlbaum Associates.
Fowler, F. J. (2002). Survey research methods (3rd ed.). Thousand Oaks,
CA: Sage.
Fraenkel, J. R., & Wallen, N. E. (2009). How to design and evaluate
research in education (5th ed.). New York: McGraw-Hill.


Freeman, D. (1998). Doing teacher research. New York: Heinle & Heinle.
Frawley, W., & Lantolf, J. P. (1984). Speaking and self-order: A critique of orthodox L2 research. Studies in Second Language Acquisition, 6 (2), 143-159.
Fulcher, G., & Davidson, F. (2007). Language testing and assessment: An
advanced resource book. London: Routledge.
Gall, M. D., Borg, W. R., & Gall, J. P. (2006). Educational research: An
introduction (8th ed.). Upper Saddle River, NJ: Pearson Education.
Gamst, G., Meyers, L. S., & Guarino, A. J. (2008). Analysis of variance
designs: A conceptual and computational approach with SPSS and SAS.
Cambridge, UK: Cambridge University Press.
Gao, Y. H., Li, L. C., & Lü, J. (2001). Trends in research methods in
applied linguistics: China and the West. English for Specific Purposes,
20, 1-14.
Garcez, P. M. (1997). Microethnography. In N. H. Hornberger & D. Corson
(Eds.), Research methods in language and education (pp. 187-198).
Encyclopedia of Language and Education, Vol. 8. Dordrecht: Kluwer.
Garfinkel, H. (1967). Studies in ethnomethodology. Englewood Cliffs, NJ:
Prentice-Hall.
Gass, S. M. (2001). Innovations in second language research methods.
Annual Review of Applied Linguistics, 21, 221-232.
Gass, S. M., & Mackey A. (2000). Stimulated recall methodology in second
language research. Mahwah, NJ: Lawrence Erlbaum Associates.
Gay, L.R., Mills, G., & Airasian, P. W. (2008). Educational research:
Competencies for analysis and applications (9th ed.). Upper Saddle
River, NJ: Prentice Hall.
Gee, J.P. (2005). An introduction to discourse analysis: Theory and method.
New York: Routledge.
Geertz, C. (1973). The interpretation of cultures. New York: Basic Books.
Geisinger, K., Spies, R., Carlson, J., & Plake, B. (Eds.). (2007). The
seventeenth mental measurements yearbook. Lincoln: University of
Nebraska, Buros Institute of Mental Measurements.
George, A., & Bennett, A. (2005). Case study and theory development in the
social sciences. Cambridge: MIT Press.
Gerring, J. (2007). Case study research: Principles and practices.
Cambridge, UK: Cambridge University Press.
Gibbons, J. D. (1976). Nonparametric methods for quantitative analysis.
New York: Holt, Rinehart & Winston.
Gibbons, P. (2006). Bridging discourses in the ESL classroom: Students,
teachers and researchers. London: Continuum.
Giere, R. N. (2004). Understanding scientific reasoning. Graton, CA:
Wadsworth.


Given, L. M. (2008). The Sage encyclopedia of qualitative research
methods. Thousand Oaks, CA: Sage.

Glass, G. V., & Hopkins, K. D. (1996). Statistical methods in education and
psychology. Englewood Cliffs, NJ: Prentice Hall.

Glass, G. V., McGaw, B., & Smith, M. L. (1981). Meta-analysis in social
research. Beverly Hills, CA: Sage.

Glaser, B. G. (1992). The basics of grounded theory analysis: Emergence
vs. forcing. Mill Valley, CA: Sociology Press.

Glaser, B. G. (2001). The grounded theory perspective: Conceptualization
contrasted with description. Mill Valley, CA: Sociology Press.

Glaser, B. G., & Strauss, A. L. (1967). The discovery of grounded theory:
strategies for qualitative research. Chicago: Aldine.

Glesne, C. (2006). Becoming qualitative researchers: An introduction (3rd
ed.). Boston: Pearson Education.

González, J. M. (Ed.). (2008). Encyclopedia of bilingual education (Vol. 1
& 2). Thousand Oaks, CA: Sage.

Gorard, S. (2001). Quantitative methods in educational research: The role
of numbers made easy. London: Continuum.

Gorard, S. (2003). Quantitative methods in social science. London:
Continuum.

Golden-Biddle, K., & Locke, K. (1997). Composing qualitative research.
Thousand Oaks, CA: Sage.

Goldstein, T. (1995). Interviewing in multicultural/multilingual settings.
TESOL Quarterly, 29 (3), 587-593.

Goldstein, T. (1997). Language research methods and critical pedagogy. In
N. H. Hornberger & D. Corson (Eds.), Research methods in language
and education (pp. 67-78). Encyclopedia of Language and Education,
Vol. 8. Dordrecht: Kluwer.

Gomm, R., Hammersley, M., & Foster, P. (Eds.). (2000). Case study
method. Thousand Oaks, CA: Sage.

Goodley, D., Lawthorm, R., Clough, P., & Moore, M. (2004). Researching
life stories: Method, theory, and analyses in a biographical age. London:
Routledge.

Goodson, I., & Sikes, P. (2001). Life history research in educational settings:
Learning from lives. Buckingham, UK: Open University Press.

Gorard, S. & Taylor, C. (2004). Combining methods in educational and
social research. Buckingham, UK: Open University Press.

Gravetter, F. J., & Wallnau, L. B. (2010). Statistics for the behavioral
sciences (8th ed.). Belmont, CA: Wadsworth.

Gray, D. E. (2009). Research in the real world. (2nd ed.) Thousand Oaks,
CA: Sage.


Green, J. L., & Nixon, C. N. (2002). Exploring differences in perspectives
on microanalysis of classroom discourse: Contributions and concerns.
Applied Linguistics, 23, 393-406.

Greene, J. (2007). Mixed methods in social inquiry. Hoboken, NJ: Wiley.
Greene, J. (2008). Is mixed methods social inquiry a distinctive

methodology? Journal of Mixed Methods Research, 2(1), 7-22.
Greene, J., & D’Oliveira, M. (2005). Learning to use statistical tests in

psychology (3rd ed.). Milton Keynes: Open University Press.
Grbich, C. (2004). New approaches in social research. London: Sage.
Gries, S. T. (2009). Quantitative corpus linguistics with R: A practical

introduction. New York: Routledge.
Grills, S. (Ed.). (1998). Doing ethnographic research. London: Sage.
Grotjahn, R. (1987). On the methodological basis of introspective methods.

In C. Faerch & G. Kasper (Eds.), Introspection in second language
research (pp. 54-81). Clevedon: Multilingual Matters.
Grotjahn, R. (1991). The research program subjective theories: A new
approach in second language research. Studies in Second Language
Acquisition, 13, 187-214.
Guba, E. G. (Ed.). (1990). The paradigm dialog. Newbury Park, CA: Sage.
Guba, E. G., & Lincoln, Y. S. (1994). Competing paradigms in qualitative
research. In N. K. Denzin & Y. S. Lincoln (Eds.), Handbook of
qualitative research (2nd ed.). (pp. 105-117). Thousand Oaks, CA: Sage.
Guilford, J., & Fruchter, B. (1973). Fundamental statistics in psychology and
education. New York: McGraw Hill.
Gupta, A. (2006). Empiricism and experience. Oxford, UK: Oxford
University Press.
Gwet, K. (2001). Handbook of inter-rater reliability. Gaithersburg, MD:
Stataxis.
Hair, J. F., Black, W. C., Babin, B. J., & Anderson, R. E. (2010).
Multivariate data analysis (7th ed.). Prentice Hall.
Haladyna, T. M. (2004). Developing and validating multiple-choice test
items. Mahwah, NJ: Lawrence Erlbaum Associates.
Hammersley, M. (1990). Classroom ethnography: Empirical and
methodological essays. Milton Keynes: Open University Press.
Hammersley, M., & Atkinson, P. (1995). Ethnography. London: Routledge.
Hancock, G. R. & Mueller, R. O. (2010). The reviewer’s guide to
quantitative methods in the social sciences. London: Routledge.
Harklau, L. (2005). Ethnography and ethnographic research on second
language teaching and learning. In E. Hinkel (Ed.), Handbook of
research in second language learning (pp. 179-194). Mahwah, NJ:
Lawrence Erlbaum Associates.
Harlow, L. L. (2005). The essence of multivariate thinking: basic themes
and methods. Mahwah, NJ: Lawrence Erlbaum Associates.


Harris, R. J. (2001). A primer of multivariate statistics (3rd ed.). Mahwah,
NJ: Lawrence Erlbaum Associates.

Hart, C. (1998). Doing a literature review. London: Sage.
Hatch, E. M., & Farhady, H. (1982). Research design and statistics for

applied linguistics. Rowley, Mass.: Newbury House.
Hatch, E. M., & Lazaraton, A. (1991). The research manual: Design and

statistics for applied linguistics. New York: Newbury House.
Hatch, J. A. (2002). Doing qualitative research in education settings. State

University of New York Press.
Have, P. (1999). Doing conversation analysis. London: Sage.
Healey, J., F. (2009). Statistics: A tool for social research (8th ed.).

Belmont, CA: Wadsworth.
Heap, J. L. (1997). Conversation analysis methods in researching language

and education. In N. H. Hornberger & D. Corson (Eds.), Research
methods in language and education (pp. 217-226). Encyclopedia of
Language and Education, Vol. 8. Dordrecht: Kluwer.
Heigham, J., & Croker, R. A. (Eds.). (2009). Qualitative research in applied
linguistics. New York: Palgrave Macmillan.
Heiman, G. W. (2011). Basic statistics for the behavioral sciences.
Belmont, CA: Wadsworth.
Hendricks, C. (2009). Improving schools through action research: A
comprehensive guide for educators. Upper Saddle River, NJ: Pearson.
Henn, M., Weinstein, M. & Foard, N. (2006). A short introduction to social
research. London: Sage.
Henning, G. (1986). Quantitative methods in language acquisition research.
TESOL Quarterly, 20, 701-708.
Hesse-Biber, S. N. (2010). Mixed methods research: merging theory with
practice. New York: The Guilford Press.
Hewson, C., Yule, P., Laurent, D. & Vogel, C. (2003). Internet research
methods. London: Sage.
Hildenbrand, B. (2004). Anselm Strauss. In U. Flick, E. von Kardoff & I.
Steinke (Eds.), A companion to qualitative research. London: Sage, 17-
23.
Hinkel, E. (Ed.). (2005). Handbook of research in second language
learning. Mahwah, NJ: Lawrence Erlbaum Associates.
Hinkel, E. (Ed.). (2011). Handbook of research in second language learning
(Vol. 2). Mahwah, NJ: Lawrence Erlbaum Associates.
Hinton, P. R. (2004). Statistics explained (2nd ed.). London: Routledge.
Ho, R. (2006). Handbook of univariate and multivariate data analysis and
interpretation with SPSS. Chapman and Hall/CRC.
Hock, R. (2004). Extreme searcher’s Internet handbook. Medford, NJ:
Information Today.


Holliday, A. (1996). Developing a sociological imagination: Expanding
ethnography in international English language education. Applied
Linguistics, 17 (2), 234-255.

Holliday, A. (2002). Doing and writing qualitative research. Thousand
Oaks, CA: Sage.

Holliday, A. (2004). Issues of validity in progressive paradigms of
qualitative research. TESOL Quarterly, 38 (4), 731-734.

Hood, M. (2009). Case Study. In Juanita Heigham & Robert A. Croker
(Eds.), Qualitative research in applied linguistics (pp. 66-90). New
York: Palgrave Macmillan.

Hopkins, D. A. (2008). A teacher’s guide to classroom research (4th ed.).
Buckingham, UK: Open University Press.

Hopkins, K. D., & Glass, G. V. (1996). Basic statistics for the behavioral
sciences. Englewood Cliffs, NJ: Prentice-Hall.

Hornberger, N. H. (2008). Encyclopedia of language and education (Vols 1-
10). (2nd ed.). New York: Springer.

Hornberger, N. H. (1994). Ethnography. TESOL Quarterly, 28 (4), 688-690.
Howell, D. C. (2010). Statistical methods for psychology (7th ed.). Pacific

Grove, CA: Duxbury/Thomson Learning.
Howitt, D. (2010). Introduction to qualitative research methods in

psychology. Harlow: Pearson Education.
Howitt, D. & Cramer, D. (2011). Introduction to research methods in

Psychology. (3rd ed.). Harlow: Pearson.
Huber, P. J. & Ronchetti, E. M. (2009). Robust statistics (2nd ed.). NJ: John

Wiley & Sons.
Huberty, C. (1994). Applied discriminant analysis. New York: Wiley.
Huck, S. W. (2012). Reading statistics and research (6th ed.). Boston:

Pearson Education.
Hughes, A. (2003). Testing for language teachers (2nd ed.). Cambridge,

UK: Cambridge University Press.
Hulstijn J. H. (1997). Second language acquisition research in the

laboratory: Possibilities and limitations. Studies in Second Language
Acquisition, 19, 131-143.
Hyland, K. (2002). Teaching and researching writing. London: Longman.
Hyland, K. & Paltridge, B. (Eds.). (2011). Continuum companion to
discourse analysis. London: Continuum.
Janke, S. J. & Tinsley, F. C. (2005). Introduction to linear models and
statistical inference. NJ: John Wiley & Sons.
Jarvis, S. (2002a). Research in TESOL Part I. TESOL Research Interest
Section Newsletter, 8(3), 1–2.
Jarvis, S. (2002b). Research in TESOL Part II. TESOL Research Interest
Section Newsletter, 9(1), 1–2.


Johnson, A. P. (2008). A short guide to action research (3rd ed.). Boston:
Pearson Education.

Johnston, B. (1997). Do EFL teachers have careers? TESOL Quarterly, 31
(4), 681-712.

Johnson, D. M. (1992). Approaches to research in second language
learning. New York: Longman.

Johnson, D. M., & Saville-Troike, M. (1992). Validity and reliability in
qualitative research: Two researchers comment. TESOL Quarterly, 26
(3), 602-605.

Johnson, K. (1996). The role of theory in L2 teacher education. TESOL
Quarterly, 30 (4), 765-771.

Johnson, K. (2008). Quantitative methods in linguistics. Oxford, UK:
Blackwell.

Johnson, R. B. & Christensen, L. B. (2010). Educational research:
Quantitative, qualitative, and mixed approaches (4th ed.). Thousand
Oaks, CA: Sage.

Josselson, R. B., & Lieblich, A. (Eds.). (1993). The narrative study of lives.
Newbury Park, CA: Sage.

Johnson, R. B., & Onwuegbuzie, A. J. (2004). Mixed methods research: A
research paradigm whose time has come. Educational Researcher, 33(7),
14-26.

Johnson, R. B., Onwuegbuzie, A. J., & Turner, L. A. (2007). Toward a
definition of mixed methods. Journal of Mixed Methods Research, 1(1),
112-133.

Kalof, L., Dan, A. & Dietz, T. (2008). Essentials of social research.
Maidenhead, UK: Open University Press.

Kane, M., & Trochim, W. M. K. (2007). Concept mapping for planning and
evaluation. Thousand Oaks, CA: Sage.

Kasper, G. (1998). Analyzing verbal protocols. TESOL Quarterly, 32 (2),
358-362.

Kaplan, R. (Ed.). (2010). The Oxford handbook of applied linguistics (2nd
ed.). Oxford, UK: Oxford University Press.

Keppel, G., & Wickens, T. D. (2004). Design and analysis: A researcher’s handbook (4th ed.). Upper Saddle River, NJ: Pearson/Merrill Prentice Hall.

Kiely, R. & Rea-Dickins, P. (2005). Program evaluation in language
education. New York: Palgrave Macmillan.

King, K. & Hornberger, N. H. (2008). Research methods in language and
education: Encyclopedia of language and education (Vol. 10). (2nd ed.).
New York: Springer.

Kirk, R. E. (1982). Experimental design: Procedures for the behavioral
sciences (2nd ed.). Monterey, CA: Brooks/Cole.


Kirk, R. E. (1995). Experimental design: Procedures for the behavioral
sciences (3rd ed.). Pacific Grove, CA: Brooks/Cole.

Kirk, R. E. (1996). Practical significance: A concept whose time has come.
Educational and Psychological Measurement, 56, 746-759.

Kirk, R. E. (2008). Statistics: An introduction (5th ed.). Belmont, CA:
Wadsworth.

Kvale, S. (1996). Interviews: An introduction to qualitative research
interviewing. London: Sage.

Kemmis, S., & McTaggart, R. (Eds.). (1992). The action research planner.
Geelong, Victoria, Australia: Deakin University Press.

Kennedy, G. (1998). An introduction to corpus linguistics. London:
Longman.

Kennedy, C. H. (2005). Single-case designs for educational research.
Boston: Pearson.

Kiel, L. D., & Elliot, E. (Eds.) (1996). Chaos theory in the social sciences:
Foundations and applications. Ann Arbor: University of Michigan Press.

Kincheloe, J. L. (2003). Teachers as researchers: Qualitative inquiry as a
path to empowerment (2nd ed.). London: Routledge.

Kline, R. (2004). Beyond significance testing: Reforming data analysis
methods in behavioral research. Washington, DC: American
Psychological Association.

Kothari, C. R. (2008). Research methodology: Methods and techniques (2nd
ed.). New Delhi: New Age International Publishers.

Kouritzin, S. (2000). Bringing life to research: Life history research and
ESL. TESL Canada Journal, 17 (2), 1-35.

Krathwohl, D. R. (1998). Methods of educational and social science
research: An integrated approach (2nd ed.). New York: Longman.

Krippendorff, K. (2004). Content analysis: An introduction to its
methodology. Thousand Oaks, CA: Sage.

Kubiszyn, T., & Borich, G. (2006). Educational testing and measurement.
New York: Wiley.

Kumaravadivelu, B. (1999). Critical classroom discourse analysis. TESOL
Quarterly, 33 (3), 453-484.

Kumaravadivelu, B. (2006a). Understanding language teaching: From
method to postmethod. Mahwah, NJ: Lawrence Erlbaum.

Kumpulainen, K., & Wray, D. (Eds.). (2002). Classroom interaction and
social learning: From theory to practice. London: Routledge.

Landau, S. & Everitt, B. (2004). Handbook of statistical analysis using
SPSS. Boca Raton, FL: Chapman & Hall/CRC.

Lantolf, J. P. (1996). Second language acquisition theory-building: Letting
all the flowers bloom! Language Learning, 46 (4), 713-749.

Lapan, D. S. & Quartaroli, M. T. (Eds.). (2009). Research essentials: An
introduction to designs and practices. San Francisco: Jossey-Bass.


Lapp, D., & Fisher, D. (Eds.). (2011). Handbook of research on teaching
the English language arts (3rd ed.). New York: Routledge.

Larsen-Freeman, D. (1997). Chaos/complexity science and second language
acquisition. Applied Linguistics, 18 (2), 141-165.

Larsen-Freeman, D., & Long, M. H. (1991). An introduction to second
language acquisition research. New York: Longman.

Larson-Hall, J. (2010). A guide to doing statistics in second language
research using SPSS. New York: Routledge.

Lavrakas, P. J. (2008). Encyclopedia of survey research methods (Vol. 1 &
2). Thousand Oaks, CA: Sage.

Lawrence-Lightfoot, S., & Davis, J. H. (1997). The art and science of
portraiture. San Francisco: Jossey-Bass.

Lazaraton, A. (1991). A computer supplement to the research manual. New
York: Newbury House.

Lazaraton, A. (1995). Qualitative research in TESOL: A progress report.
TESOL Quarterly, 29 (3), 455-472.

Lazaraton, A. (1998). Research methods in applied linguistics journal
articles. TESOL Research Interest Section Newsletter, 5 (2), 3.

Lazaraton, A. (2000). Current trends in research methodology and statistics
in applied linguistics. TESOL Quarterly, 34 (1), 175-181.

Lazaraton, A. (2002). Quantitative and qualitative approaches to discourse
analysis. Annual Review of Applied Linguistics, 22, 32-51.

Lazaraton, A. (2003). Evaluative criteria for qualitative research in applied
linguistics: Whose criteria and whose research? Modern Language
Journal, 87 (1), 1-12.

Lazaraton, A. (2005). Quantitative research methods. In E. Hinkel (Ed.),
Handbook of research in second language learning (pp. 209-224).
Mahwah, NJ: Lawrence Erlbaum Associates.

Lazaraton, A. (2009). Discourse analysis. In Juanita Heigham & Robert A.
Croker (Eds.), Qualitative research in applied linguistics (pp. 242-259).
New York: Palgrave Macmillan.

Leary, M. R. (2011). Introduction to behavioral research methods (6th ed.).
Englewood Cliffs, NJ: Prentice-Hall.

LeCompte, M., Millroy, W. L. & Preissle, J. (Eds.). (1992). The Handbook
of qualitative research in education. London: Academic Press.

LeCompte, M. & Preissle, J. (1993). Ethnography and qualitative design in
educational research (2nd ed.). London: Academic Press.

Lee, E., & Simon-Maeda, A. (2006). Racialized research identities in
ESL/EFL research. TESOL Quarterly, 40 (3), 573-594.

Leech, N. L., Barrett, K. C., & Morgan, G. A. (2005). SPSS for intermediate
statistics: Use and interpretation (2nd ed.). Mahwah, NJ: Lawrence
Erlbaum Associates.


Leech, N. L., & Onwuegbuzie, A. J. (2007). An array of qualitative data
analysis tools: A call for qualitative data analysis triangulation. School
Psychology Quarterly, 22 (4), 557-584.

Leow, R. P., & Morgan-Short, K. (2004). To think aloud or not to think
aloud. Studies in Second Language Acquisition, 26 (1), 35-57.

Levy, M. (1997). CALL: Context and Conceptualisation. Oxford, UK:
Oxford University Press.

Levy, P. S. & Lemeshow, S. (1999). Sampling of populations: Methods and
applications. New York: Wiley-Interscience.

Lewins, A., & Silver, C. (2007). Using software in qualitative analysis.
Thousand Oaks, CA: Sage.

Lewis-Beck, M. (Ed.). (1993). Experimental design and methods. Thousand
Oaks, CA: Sage.

Liamputtong, P., & Ezzy, D. (2005). Qualitative research methods (2nd
ed.). Melbourne, Australia: Oxford University Press.

Lieblich, A., Tuval-Mashiach, R., & Zilber, T. (1998). Narrative research:
Reading, analysis and interpretation. Thousand Oaks, CA: Sage.

Lincoln, Y. S., & Guba, E. G. (1985). Naturalistic inquiry. Beverley Hills:
Sage.

Lipsey, M. W., & Wilson, D. B. (2000). Practical meta-analysis. Thousand
Oaks, CA: Sage.

Litosseliti, L. (Ed.). (2010). Research methods in linguistics. London:
Continuum.

Locke, L. F., Silverman, S. J., & Spirduso, W. W. (1998). Reading and
understanding research. Thousand Oaks, CA: Sage.

Locke, L., Silverman, S., & Spirduso, W. (2007). Proposals that work: A
guide for planning dissertations and grant proposals. Thousand Oaks,
CA: Sage.

Lodico, M. G., Spaulding, D. T. & Voegtle, K. H. (2010). Methods in
educational research: From theory to practice (2nd ed.). San Francisco:
Jossey-Bass.

Loehlin, J. (2004). Latent variable models: An introduction to factor, path,
and structural equation analysis. Mahwah, NJ: Lawrence Erlbaum
Associates.

Lohr, S. L. (1998). Sampling: design and analysis. Pacific Grove, CA:
Brooks/Cole.

Lomax, R. G. (2007). An introduction to statistical concepts for education
and behavioral sciences (2nd ed.). Mahwah, NJ: Lawrence Erlbaum
Associates.

Long, M. H. (1990). The least a second language acquisition theory needs to
explain. TESOL Quarterly, 24 (4), 649-666.

Long, M. H. (1997). Construct validity in SLA research: A response to Firth
and Wagner. Modern Language Journal, 81 (3), 318-323.

