mental models has shown that teams demonstrate better problem solving
performances if the individual problem representations (the individual mental mod-
els of the problem) are similar among group members (Klimoski and Mohammed
1994). Similarity among representations can be achieved through communication.
In contrast to a shared mental model approach that just looks at similarities among
individual representations, Roschelle and Teasley (1995) have proposed the concept
of a joint problem space. This problem space is created and maintained through
constant coordination and communication among collaborators, and serves as a
basis for collaborative action.
Collaborators need a shared plan on how to achieve a goal state. Collaborative
planning needs to include the management of resources. Research on transactive
memory systems (Wegner 1986) has shown that groups benefit if members know
who knows what or who has identified specific elements of the problem space in a
group. In the case of groups composed of members with different problem-relevant
knowledge (i.e., consistent with the requisite features of problems that might justify
collaboration), the management of resources ideally ensures that group
members share all available information. The occurrence of information sharing is
far from guaranteed: social psychological research has demonstrated that group
members tend to mention shared information but neglect unshared information that
is unique to only one group member (Stasser and Titus 1985). Resource allocation
is not limited to knowledge. It also needs to include the identification of capacity to
perform processing and the monitoring of processes.
Plans must be executed by the group. In some collaborative problem solving
situations this requires an orchestrated effort by several group members in parallel.
One of the pitfalls of collaborative action is that groups typically suffer from pro-
cess losses (Steiner 1972), i.e., groups perform worse than they ideally could, given
the members’ abilities and resources. Process losses can be caused by group mem-
bers’ reduced task motivation (social loafing; Karau and Williams 1993), by addi-
tional social goals resulting from the group situation that take away resources
from the task (Wittenbaum et al. 2004), and by reduced cognitive capacity due to the
social situation (Diehl and Stroebe 1987).
Progress and courses of action must be evaluated, plans must be reformulated if
necessary, and collaborators must decide on how to proceed. This again involves the
risk of process losses. The analysis of monitoring activities can be informed by
research on how groups implicitly and explicitly orchestrate decision making. For
instance, groups can be characterised through their use of implicit social decision
schemes like “truth wins”, “majority wins”, or “plurality wins” (Laughlin and Ellis
1986). Moreover, groups can be differentiated by their explicit timing of decision
making procedures. While some groups start by making decisions and then seek
evidence that supports their decisions, other groups demonstrate a deliberative
approach that starts with the seeking of evidence and then converges on a decision
(Hastie and Pennington 1991). More generally, the successful allocation of resources
requires awareness of a group’s progress concerning the problem it faces and the
resources available within the group, and is facilitated by a shared understanding of
the desired state (Peterson and Behfar 2005).
In this logical sequence of processes, participants externalise their individual
problem solving processes, and coordinate these contributions into a coherent
sequence of events. The degree to which this idealised sequence takes place in
reality is unclear. In any given case, its occurrence will depend not only on the
group's dynamics but also on the characteristics of the problem space.
Collaborative problem solving is not a uniform process but a complex, coordi-
nated activity between two or more individuals. Consequently, efficient problem
solving relies not on a uniform skill but rather on a set of distinguishable sub-
skills which are deployed in accordance with situational needs. While the five
processes mentioned above (problem identification, problem representation,
planning, executing, monitoring) can serve to describe collaborative problem
solving, it is not the case that collaborative problem solving skills can be easily
mapped to the different stages. Rather, many skills cut across several problem
solving stages.
Collaborative Problem Solving Skills
Based on the literature in several research fields, the ATC21S™ project¹ has devel-
oped a framework consisting of a hierarchy of skills that play a pivotal role in
collaborative problem solving. The identified skills must fulfill three criteria:
(1) they must be measurable in large-scale assessment, (2) they must allow the
derivation of behavioural indicators that (after some training) can be assessed by
teachers in a classroom setting, and (3) they must be teachable. Only if these three
conditions are met will collaborative problem solving skills become a part of learn-
ing diagnostics, both in everyday classroom practice and in large-scale assessment
studies like PISA (OECD 1999).
The framework of collaborative problem solving skills proposed here is based on
the distinction between two very broad skill classes: social skills and cognitive
skills. Social skills constitute the “collaborative” part of “collaborative problem
solving”. They play an important role in collaborative problem solving but are also
a feature of many other collaborative tasks. Cognitive skills constitute the “problem
solving” part of “collaborative problem solving”. These skills address typical cogni-
tive issues of problem solving and have more in common with classical approaches
to individual problem solving. To clarify this distinction it can be said that the social
skills are about managing participants (including oneself), whereas cognitive skills
are about managing the task at hand. In the following, both classes of skill are
described and discussed in more detail.
¹ The acronym ATC21S™ has been globally trademarked. For purposes of simplicity the acronym
is presented throughout the chapter as ATC21S.
Social Process Skills
In order to be successful in collaborative problem solving, individuals need a
number of social skills to help them coordinate actions in synchrony with other
participants. Our conceptualisation refers in particular to three classes of
indicators that can be subsumed under the general rubric of social skills:
participation, perspective taking, and social regulation (Table 2.1). Participation
describes the minimum requirements for collaborative interaction. It refers to the
willingness and readiness of individuals to externalise and share information and
thoughts, and to be involved in the stages of problem solving (Stasser and Vaughan
1996). The concept of perspective taking skills refers to the ability to see a problem
through the eyes of a collaborator (Higgins 1981). This can be extremely helpful, as
it allows for smoother coordination among collaborators. Moreover, for particular
types of tasks, perspective taking skills are essential, as a group cannot come to a
solution unless its members have the capacity to understand the concrete situation
their collaborators are in (e.g., Trötschel et al. 2011). Finally, the concept of social
regulation skills refers to the more strategic aspects of collaborative problem solv-
ing (Peterson and Behfar 2005). Ideally, collaborators use their awareness of the
strengths and weaknesses of all group members to coordinate and resolve potential
differences in viewpoints, interests and strategies.
Participation Skills
Many accounts in the learning sciences stress the importance of participation, albeit
with slightly different focuses. According to socio-constructivist epistemologies,
participation refers to the long-term process of becoming part of a community of
practice (Lave and Wenger 1991). At first, learners take a peripheral role in a
community (legitimate peripheral participation), but once they become more expe-
rienced as community members they take on more responsibilities. According to a
cognitively and linguistically oriented epistemology, participation refers to the
observable action of engaging in discourse. In this research tradition, Cohen (1994)
suggested that the extent to which learners participate in a collaborative activity
is the best predictor of individual learning outcomes, provided that a task is collab-
orative (i.e. it cannot be accomplished by division of labour alone) and provided that
the problem is relatively ill-structured. Whichever epistemology is preferred, par-
ticipation is regarded as a crucial concept in the learning sciences that constitutes or
at least leads to learning.
Within the range of participation skills, our framework further distinguishes
between three aspects: action, interaction, and task completion. “Action” refers to
the general level of participation of an individual, irrespective of whether this action
is in any way coordinated with the efforts of other group members. While most
classical psychologists would argue that actions are just behavioural consequences
of internal, cognitive processes, many learning scientists regard actions as the
fundamental “carriers” of cognition (Hutchins 1995; Nardi 1996). Problem solvers
differ in the level of sophistication with which they act in a group. While some
problem solvers do not become active at all, others become active once the
environment is highly scaffolded (e.g. through explicit task instructions). Finally,
the most sophisticated way of acting in a group is demonstrated by those who have
the ability to perform actions even in the absence of instructional scaffolds.

Table 2.1 Social skills in collaborative problem solving

Participation
  Action: Activity within environment
    Low: No or very little activity
    Middle: Activity in familiar contexts
    High: Activity in familiar and unfamiliar contexts
  Interaction: Interacting with, prompting and responding to the contributions of others
    Low: Acknowledges communication directly or indirectly
    Middle: Responds to cues in communication
    High: Initiates and promotes interaction or activity
  Task completion/perseverance: Undertaking and completing a task or part of a task individually
    Low: Maintains presence only
    Middle: Identifies and attempts the task
    High: Perseveres in task as indicated by repeated attempts or multiple strategies

Perspective taking
  Adaptive responsiveness: Ignoring, accepting or adapting contributions of others
    Low: Contributions or prompts from others are taken into account
    Middle: Contributions or prompts of others are adapted and incorporated
    High: Contributions or prompts of others are used to suggest possible solution paths
  Audience awareness (Mutual modelling): Awareness of how to adapt behaviour to increase suitability for others
    Low: Contributions are not tailored to participants
    Middle: Contributions are modified for recipient understanding in the light of deliberate feedback
    High: Contributions are tailored to recipients based on interpretation of recipients’ understanding

Social regulation
  Negotiation: Achieving a resolution or reaching compromise
    Low: Comments on differences
    Middle: Attempts to reach a common understanding
    High: Achieves resolution of differences
  Self evaluation (Metamemory): Recognising own strengths and weaknesses
    Low: Notes own performance
    Middle: Comments on own performance in terms of appropriateness or adequacy
    High: Infers a level of capability based on own performance
  Transactive memory: Recognising strengths and weaknesses of others
    Low: Notes performance of others
    Middle: Comments on performance of others in terms of appropriateness or adequacy
    High: Comments on expertise available based on performance history
  Responsibility initiative: Assuming responsibility for ensuring parts of task are completed by the group
    Low: Undertakes activities largely independently of others
    Middle: Completes activities and reports to others
    High: Assumes group responsibility as indicated by use of first person plural
“Interaction” refers to behaviour that demonstrates interaction with and responses
to others. For instance, some learners are highly active in collaborative problem
solving, but fail to respond to or coordinate with their collaborators. A higher level
of interaction skill is exemplified by problem solvers who respond to cued interac-
tion, e.g. by answering an inquiry from a collaborator. The highest level of interac-
tion skill manifests itself if learners actively initiate coordination efforts, or prompt
their collaborators to respond. Interaction among problem solvers is a minimum
requirement for successful coordination (Crowston et al. 2006) and it is achieved
through verbal and nonverbal means (Clark 1996).
“Task completion” skills refer to motivational aspects of participation and conse-
quent perseverance on a task. Collaborative problem solvers differ in the degree to
which they feel committed to the activity. Accordingly, they may enter the problem
solving space but not be sufficiently engaged to remain actively involved, or at the
other end of the spectrum, may persist in engagement as indicated by multiple
attempts at tasks or by trying different strategies.
Perspective Taking Skills
While the quantity of participation is an important predictor of collaborative prob-
lem solving performance, perspective taking skills revolve more around the quality
of interaction. Theoretically, perspective taking can be linked to constructs that stem
from sub-disciplines as diverse as psychology of emotion, social psychology, and
psycholinguistics; consequently, perspective taking is a multidimensional construct
encompassing affective, social-developmental, and linguistic aspects. On an
affective level, perspective taking can be linked to the notion
of empathy and the emotional understanding of, and identification with, others.
More important in the current context, on a cognitive level, perspective taking is
related to “theory of mind” concepts, and it describes the ability to understand a
state of affairs from a different spatial or psychological perspective. If this ability is
not in place, people are subject to egocentric bias, i.e. they expect others to be
highly similar to themselves (Zuckerman et al. 1983). Perspective taking is often
considered a core communicative competence (Weinstein 1969). Finally, a linguistic
aspect of perspective taking encompasses not only the ability to contextualise
utterances of peers by reference to background information, but also the ability to
tailor one's own utterances to the needs and intellectual capabilities of peer learners. This abil-
ity is often subsumed under the label of ‘audience design’ (Clark and Murphy
1982). It should be noted that while there is a general consensus among scholars that
audience design is helpful to coordinate mutual activities, empirical evidence indi-
cates that participants sometimes lack the ability or willingness to adapt to their
communication partners (e.g. Horton and Keysar 1996).
The framework of collaborative problem solving skills distinguishes between
two aspects of perspective taking skills: responding skills and audience awareness
skills. Responding skills become apparent when problem solvers manage to inte-
grate contributions of collaborators into their own thoughts and actions. For instance,
problem solvers who rethink a problem representation based on evidence that was
reported by a collaborator exhibit a high degree of responding skill. In contrast,
ignoring contributions from others exemplifies a low degree of responding skill.
Audience awareness skills are constituted by the ability to tailor one’s contribu-
tions to others (Dehler et al. 2011). Depending on variables like the amount of
egocentric bias, problem solvers are more or less skilled in adapting their utterances
to the viewpoints of others, or in making their actions visible and comprehensible to
their collaborators. For example, imagine two problem solvers who are placed on
different sides of a transparent screen. For a particular object on the left side from a
problem solver’s point of view, low audience awareness would be exhibited by
referring to the object as being “on the left side”. In contrast, higher audience aware-
ness would be exemplified by referring to the object as being “on the right side” or
even “on your right side”.
To clarify the distinction between responding skills and audience awareness
skills it can be said that the former involve the ability to be adaptive in one’s inter-
nalisations of information (similar to Piaget’s accommodation; Piaget and Inhelder
1962), whereas the latter involve the ability to be adaptive in one’s externalisations
of knowledge. The two aspects of perspective taking explicated in the current frame-
work can thus be characterised respectively as receptive and expressive.
Social Regulation Skills
One of the main benefits of collaborating in a group is the potential diversity group
members bring to their interactions. Different members have different knowledge,
different expertise, different opinions, and different strategies. Evidence for the
power of diversity has been found in the research of various disciplines that analyse
group performance. For instance, in organisational psychology the concept of infor-
mational diversity among team members was identified as a key ingredient of team
performance (De Wit and Greer 2008). The effects of diversity are particularly posi-
tive when group tasks require creativity and elaboration (van Knippenberg and
Schippers 2007). In education, diversity among group members is considered to
stimulate useful cognitive conflict (Doise and Mugny 1984), conceptual change
(Roschelle 1992), or multiperspectivity (Salomon 1993). However, diversity is not
valuable in itself; it only becomes useful in collaboration when participants
know how to deal with the diversity of viewpoints, concepts, and strategies under
discussion (van Knippenberg et al. 2004). In other words, collaborative problem
solvers need strategic skills to harness the diversity of group members, and they
must employ mechanisms of social regulation and negotiation (Thompson et al.
2010) that act appropriately on group diversity. Groups have a tendency not to make
use of the full potential of diversity (Hinsz et al. 1997). Among other things, dissent-
ing information is often disregarded by individuals (confirmation bias; Jonas et al.
2001), shared information is preferred over unshared information (Stasser and Titus
1985), and minority viewpoints have less influence than majority viewpoints (Wood
et al. 1994). If group members possess the skills to overcome biased information
handling in groups and can regulate conflicts, they can fully exploit the benefits of
diversity that their collaborators bring into the joint problem solving effort.
The framework of collaborative problem solving skills distinguishes four aspects
that can be related to social regulation: metamemory, transactive memory, negotia-
tion and initiative. The first two of these aspects refer to the ability to recognise
group diversity, which breaks down into knowledge about oneself (metamemory;
Flavell 1976), and knowledge about the knowledge, strengths, and weaknesses of
one’s collaborators (transactive memory; Wegner 1986). If these two skills are
employed, collaborative problem solving groups will lay the groundwork to harness
the power of group diversity.
The presence or absence of negotiation skills becomes apparent when conflicts
arise among group members. These may be conflicts about how to represent a prob-
lem, about potential solution steps, about how to interpret evidence that is available
to the group, or about the group’s goals. In any of these cases, problem solvers must
negotiate the steps and measures that accommodate the differences between indi-
vidual approaches, for example by formulating compromises or by determining
rank orders among alternative solution steps.
Finally, the term initiative skills refers to the responsibility that a problem solver
experiences for the progress of the group. If this collective responsibility
(Scardamalia 2002) is too low, lurking behaviour or disengagement from the task
becomes likely, and the collaborative task may become unsolvable. In
contrast, higher responsibility is likely to contribute to better problem solving
performance. While some problem solvers shun confrontation or even interaction by
focusing on their individual solution attempts, others will take responsibility for
working on a shared problem representation, developing a strategic plan towards a
solution, and regularly monitoring the group's progress.
If these different skills of social regulation are apparent in a group, the coordina-
tion of collaborative problem solving activities becomes much easier, and the poten-
tial diversity among group members will be exploited in highly beneficial ways.
Cognitive Process Skills
The effectiveness and efficiency of collaborative problem solving relies not only on
social skills but also on cognitive skills. Cognitive skills of collaborative problem
solving are highly similar to those skills that are conducive to individual problem
solving, and they refer to the ways in which problem solvers manage the task at
hand and the reasoning skills employed. The framework of collaborative problem
solving categorises cognitive skills across planning, executing and monitoring, flex-
ibility, and learning. Planning skills consist in an individual’s capability to develop
strategies based on plausible steps towards a problem solution (Miller et al. 1960).
In the case of collaborative problem solving, plans need to address a shared problem
representation and provide the basis for an orchestrated and well coordinated
problem solution (Weldon and Weingart 1993). While planning refers to prospective
actions like building hypotheses, executing and monitoring is of a more retrospec-
tive nature. Problem solvers must interpret evidence, and must reflect on the
appropriateness of planned and executed solution steps (Peterson and Behfar 2005).
Monitoring is considered here as an individual-level skill, because it is more
effective when it is done individually and externalised afterwards than when learn-
ers reflect jointly about the group process (Gurtner et al. 2007). This serves as a
basis for the continuing adjustment of plans, thereby setting in motion a cyclical
problem solving behaviour. Flexibility skills are demonstrated in the creativity that
problem solvers exhibit when facing a particularly challenging part of a problem
solution (Star and Rittle-Johnson 2008), but also include the way problem solvers
react to ambiguous situations. These are particularly important if the problems are
ill-defined and require some sort of inductive thinking. Finally, learning skills are
demonstrated in the ability to learn during group interaction or as a consequence of
group interaction. They lead to knowledge building. These four cognitive skill
classes are elaborated in Table 2.2.
Table 2.2 Cognitive skills in collaborative problem solving (levels are scored Low = 0, Middle = 1, High = 2)

Task regulation
  Organises (problem analysis): Analyses and describes a problem in familiar language
    Low: Problem is stated as presented
    Middle: Problem is divided into subtasks
    High: Identifies necessary sequence of subtasks
  Sets goals: Sets a clear goal for a task
    Low: Sets general goal such as task completion
    Middle: Sets goals for subtasks
    High: Sets goals that recognise relationships between subtasks
  Resource management: Manages resources or people to complete a task
    Low: Uses/identifies resources (or directs people) without consultation
    Middle: Allocates people or resources to a task
    High: Suggests that people or resources be used
  Flexibility and ambiguity: Accepts ambiguous situations
    Low: Inaction in ambiguous situations
    Middle: Notes ambiguity and suggests options
    High: Explores options
  Collects elements of information: Explores and understands elements of the task
    Low: Identifies the need for information related to immediate activity
    Middle: Identifies the nature of the information needed for immediate activity
    High: Identifies need for information related to current, alternative, and future activity
  Systematicity: Implements possible solutions to a problem and monitors progress
    Low: Trial and error actions
    Middle: Purposeful sequence of actions
    High: Systematically exhausts possible solutions

Learning and knowledge building
  Relationships (Represents and formulates): Identifies connections and patterns between and among elements of knowledge
    Low: Focused on isolated pieces of information
    Middle: Links elements of information
    High: Formulates patterns among multiple pieces of information
  Rules “If … then”: Uses understanding of cause and effect to develop a plan
    Low: Activity is undertaken with little or no understanding of consequence of action
    Middle: Uses short sequences of cause and effect
    High: Identifies understanding of cause and effect to plan or execute a sequence of actions
  Hypothesis “what if…” (Reflects and monitors): Adapts reasoning or course of action as information or circumstances change
    Low: Maintains a single line of approach
    Middle: Tries additional options in light of new information or lack of progress
    High: Plans a strategy based on a generalised understanding of cause and effect; reconstructs and reorganises understanding of the problem in search of new solutions
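Since the rubric levels carry numeric scores (Low = 0, Middle = 1, High = 2), an observed performance profile translates directly into data that can inform teaching. The sketch below is our own illustration in Python, not part of the ATC21S instruments; the element keys and the observed levels are invented for the example.

```python
# Illustrative sketch only: encoding the Low/Middle/High rubric of Table 2.2.
RUBRIC_LEVELS = {"low": 0, "middle": 1, "high": 2}

# Indicators for the task regulation elements, abridged from Table 2.2.
TASK_REGULATION = {
    "organises": "Analyses and describes a problem in familiar language",
    "sets_goals": "Sets a clear goal for a task",
    "resource_management": "Manages resources or people to complete a task",
    "flexibility": "Accepts ambiguous situations",
    "collects_information": "Explores and understands elements of the task",
    "systematicity": "Implements possible solutions and monitors progress",
}

def score_profile(observations):
    """Map observed levels per element to numeric scores and a simple total."""
    scores = {element: RUBRIC_LEVELS[level] for element, level in observations.items()}
    return scores, sum(scores.values())

# Hypothetical observations for one student.
observed = {
    "organises": "middle", "sets_goals": "high", "resource_management": "low",
    "flexibility": "middle", "collects_information": "middle", "systematicity": "high",
}
scores, total = score_profile(observed)
print(scores)                                   # {'organises': 1, 'sets_goals': 2, ...}
print(f"total: {total} / {2 * len(observed)}")  # total: 7 / 12
weakest = min(scores, key=scores.get)
print("Teaching focus:", weakest, "-", TASK_REGULATION[weakest])
```

The simple sum is only a convenience for the example; the framework itself makes no claim that levels across elements are additive.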
Task Regulation Skills
“Planning” is one of the core activities of problem solving (Gunzelmann and
Anderson 2003). On the basis of a (joint) problem space, planning involves the
formulation of hypotheses concerning how to reach the goal, and the selection of
steps that move the problem-solving process forward. Planning is a crucial meta-
cognitive activity, as it requires problem solvers to reflect on their own (and others’)
cognitive processes (Hayes-Roth and Hayes-Roth 1979). We distinguish between
four aspects of planning: problem analysis, goal setting, resource management and
complexity. Planning begins with a problem analysis, an inspection of the individ-
ual or joint representation of a problem through which the task is segmented into
sub-tasks with consequent sub-goals. Sub-tasks and sub-goals can not only make
the problem solving process more tractable, they can also serve as important yard-
sticks to evaluate one’s progress (i.e., monitoring). A good problem solver is able to
formulate specific goals (“Next, we must move this block one tile to the left”),
whereas lower sophistication is exhibited by formulating no goals or very vague
ones (“We must try our best to change those blocks”). Research on teamwork has
shown that goal specificity improves a group’s performance (Weldon and Weingart
1993). The more a problem solver is inclined to set specific goals, the easier it is to
assess and ultimately achieve them. Many collaborative problem solving tasks can
only be accomplished if available resources are distributed properly. Resource man-
agement reflects the ability to plan how collaborators can bring their resources, their
knowledge, or their expertise into the problem solving process. A low level of
resource management skills is evident if a problem solver only plans with those
resources that are available to herself. Suggesting that collaborators make use of
specific resources indicates better resource management skills, whereas the highest
skill level is exhibited when problem solvers explicitly decide on allocation of
resources to people and/or task components. Therefore, an important aspect of plan-
ning is to manage resources that are available to oneself and to one’s collaborators
(Brown 1987). Finally, plans can differ in complexity or sophistication. This can
best be described by reference to a chess match. If a piece is moved without prior
reflection, planning complexity is low. If a sequence of moves is planned, and if
potential counter moves are reflected in parallel plans of alternative routes, higher
complexity in planning skill is demonstrated. To address these issues the framework
of collaborative problem solving skills introduces the skill class of flexibility,
which breaks down into two aspects: tolerance for ambiguity, and breadth.
Different levels of ambiguity tolerance lead to different problem solving behav-
iours – some problem solvers become active only in unambiguous situations, some
react to ambiguity by exploring the problem space, while problem solvers with high
levels of ambiguity tolerance are likely to interpret ambiguous situations in a way
that helps them in their decision making about the next solution step. As to breadth,
a low skill level is displayed if problem solvers follow only a single line of
inquiry. A medium level of flexibility entails trying multiple approaches once an
impasse is reached, or once new evidence is available via monitoring. And a high
level of breadth leads to a re-organisation of problem representation or planning
activities if progress through the problem space is impeded.
Problem solving is an activity that requires participants to cope with various bar-
riers. For instance, most problems are inherently ambiguous because the best pos-
sible solution step is not always easily identifiable. Moreover, solution steps might
lead to an impasse which represents a failure of the effort as it was originally
planned. It is not uncommon for problem solvers to withdraw from a problem when
they perceive roadblocks along the way to a solution. This can happen with all kinds
of problems but it becomes particularly important for ill-defined problems that are
ambiguous by definition. Tolerance for ambiguity (Norton 1975) is a characteristic
of problem solvers that can help to overcome the barriers in problem solving activi-
ties. Moreover, good problem solvers are adept at changing plans in a flexible
manner.
Research on human and machine problem solving has identified a number of
recurring strategies that describe different approaches to tackling a problem.
For instance, one approach was termed ‘forward search’ (Newell and Simon 1972),
and it can be characterised by taking a current problem state and identifying the
most promising operator or move, thereby working towards the goal state. Variants
of forward search include a breadth-first search (sequentially checking potential
next moves) and depth-first search (following the most promising move until an
impasse is reached). ‘Backward search’ through a problem space is the counterpart
to forward search, and it starts with identifying the most likely or promising ante-
cedent of a goal state, thereby working backwards through the problem space. Backward
search and forward search have been combined by Newell and Simon (1972), who
have developed a means-ends analysis based on the idea of selecting actions that
minimise the difference between the current state and the goal state. This means-ends
analysis effectively comprises both forward search and backward search. However,
while this and similar techniques can help to describe well-defined problems
formally, they do not fully capture the complexity of ill-defined problems. For
instance, many real-world problems are “wicked” because problem solvers lack
necessary information (Van Gundy 1987). Realising that some crucial information
is missing, and developing strategies on how to acquire this information, are impor-
tant monitoring activities. In collaborative problem solving, this type of monitoring
becomes essential, as different problem solvers typically have access to different
types of information or have different means to access needed information (Larson
and Christensen 1993).
Consequently, the framework of collaborative problem solving skills distin-
guishes between two “executing and monitoring” processes: information collection
and systematicity. Information collection skill refers to the ability to identify what
information is required and how and when it can be acquired. Some problem solvers
lack the skills to identify the types of information required. Others will recognise
the nature of the information needed, but only with regard to the current activity or
problem state. Finally, a high level of these skills entails assessing the need for
information with regard to current, alternative, and future problem states.
Systematicity refers to the level of sophistication that a problem solver’s strategy
exhibits. The most basic level of systematicity involves problem solving as a trial
and error process. A medium level of systematicity is indicated by the use of
forward search through a problem, whereas high systematicity can be identified when
forward and backward search are combined through means-ends analysis or similar
techniques, followed by highly reflective monitoring activities.
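The levels of systematicity map loosely onto classic search procedures. As a rough illustration (our sketch in Python, not part of the ATC21S framework; the toy puzzle and operator set are invented for the example), the following contrasts random trial and error, breadth-first forward search, and a simplified means-ends difference reduction:

```python
# Illustrative sketch: three levels of "systematicity" on a toy puzzle
# (reach a goal number from a start number using the operators +1, -1, *2).
import random
from collections import deque

OPERATORS = {"+1": lambda n: n + 1, "-1": lambda n: n - 1, "*2": lambda n: n * 2}

def trial_and_error(start, goal, max_steps=1000):
    """Low systematicity: apply randomly chosen operators until the goal is hit."""
    state, path = start, []
    for _ in range(max_steps):
        if state == goal:
            return path
        name = random.choice(list(OPERATORS))
        state, path = OPERATORS[name](state), path + [name]
    return None  # gave up without a solution

def forward_search(start, goal):
    """Medium systematicity: breadth-first forward search, sequentially
    checking potential next moves from each reached state (Newell & Simon 1972)."""
    frontier, seen = deque([(start, [])]), {start}
    while frontier:
        state, path = frontier.popleft()
        if state == goal:
            return path
        for name, op in OPERATORS.items():
            nxt = op(state)
            if 0 <= nxt <= 2 * goal and nxt not in seen:  # keep the space finite
                seen.add(nxt)
                frontier.append((nxt, path + [name]))
    return None

def means_ends(start, goal, max_steps=100):
    """Simplified means-ends analysis: repeatedly pick the operator that most
    reduces the difference between the current state and the goal state."""
    state, path = start, []
    for _ in range(max_steps):
        if state == goal:
            return path
        name = min(OPERATORS, key=lambda n: abs(goal - OPERATORS[n](state)))
        state, path = OPERATORS[name](state), path + [name]
    return None

print("trial and error:", trial_and_error(3, 14))  # some (possibly long) path
print("forward search:", forward_search(3, 14))    # a shortest path: ['*2', '+1', '*2']
print("means-ends:", means_ends(3, 14))            # ['*2', '*2', '+1', '+1']
```

Note that on this puzzle the greedy difference reduction finds a four-move solution while breadth-first search finds a three-move one, which hints at why full means-ends analysis also incorporates backward search rather than relying on local difference reduction alone.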
Learning and Knowledge Building Skills
Brodbeck and Greitemeyer (2000) have characterised learning as a by-product of
collaborative problem solving. Through progress in a collaborative problem solving
task, individuals can learn about a content domain or about strategies and skills;
they can also learn how to deal with impasses or how to coordinate, collaborate and
negotiate with others. There are different ways to conceptualise learning, and the
corresponding epistemologies for two of these have been described as participation
and acquisition metaphors (Sfard 1998). The classical acquisition metaphor regards
learning as the accumulation or restructuring of individual mental representations
that leave measurable residues after a task is completed. In this case, the amount of
learning can be measured through knowledge tests. In contrast, the participation
metaphor is heavily influenced by situated cognition (Greeno 1998) and socio-
culturalism (Vygotsky 1978), and regards learning as an activity rather than an out-
come. The role of mental representations is downplayed and, according to this
epistemology, knowledge is rather to be found in the environment (the task, the
discourse, the artifact) than in the heads of learners. A particular view of learning
that can be subsumed under the participation metaphor is knowledge building
(Scardamalia 2002). According to this view, learning is a discursive process through
which collaborators generate a network of ideas that build on each other. While the
knowledge building epistemology looks for learning during the process of collab-
orative problem solving, the acquisition metaphor of learning would assess learning
through the transfer of skills or understandings.
The framework of collaborative problem solving skills touches on both these
aspects, characterising the two as knowledge building and learning. Knowledge
building is exemplified by the ability to take up ideas from collaborators to refine
problem representations, plans, and monitoring activities. The highest level of
knowledge building occurs in those problem solvers who are able to integrate and
synthesise the input from collaborators (Scardamalia 2002) in the description and
interpretation of a given problem. Learning is indicated by the ability to identify
and represent relationships, understand cause and effect, and develop hypotheses
based on generalisations. A low level of learning skills would be evident if the only
knowledge that is extracted from a problem solving activity stems from information
that was directly provided through instruction.
Griffin (2014) proposed a hierarchy of steps in problem solving which lead to
knowledge building. At an initial level (beyond random guessing), students rely on
identifying isolated elements of information. In a collaborative setting where infor-
mation is unevenly and asynchronously distributed, these elements need to be
shared. Problem solvers generally describe relationships or connections between
elements of information (data) and make observations that form patterns, lending
meaning to the problem space. At the next level of problem analysis, systematic
observations of cause and effect enable players to formulate and discuss the poten-
tial of rules, either for the regulation of the task or for the manner of collaboration.
At a more sophisticated level, rules are used to complete steps or parts of the prob-
lem solution. For the most difficult sub-tasks, more able students demonstrate an
ability to generalise to a range of situations by setting and testing hypotheses, using
a “What if…?” approach. An ordered progression, moving through pattern, rule and
generalisation to hypothesis, can be developed by the collaborating partners and
alternative solution options can be proposed and tested.
It is clear that there are overlapping cycles of cognitive processes across the gen-
eral skill areas of task regulation – which includes planning, executing and monitor-
ing, and comprehending complexity – and of knowledge building and learning. The
essential difference between the two general areas is that task regulation processes
scope the problem space and collect information, whereas knowledge building and
learning use this information for extrapolation. For all the elements of the
collaborative problem solving framework, the notions of teachability and learnability
have been central to their conceptualisation. The rubrics in Table 2.2 give expression
to the central place of these notions, and provide nutshell glimpses of the
implications of the theoretical underpinnings of the construct for implementation in
an assessment framework.
The debt of the presented framework to the work of Polya (1973), Mayer (1983),
and the OECD PISA problem solving framework is substantial. The potential ten-
sion between a process approach to problem solving and a cognitive ability
approach is evident in the long history of debate concerning the teachability of
higher-order thinking processes. The ATC21S position, taking into account its assessment and
teaching endeavour, is that the function of assessment is primarily to provide data
to inform teaching. Consequently a process approach to collaborative problem
solving is consistent with the project’s primary goals. The extent to which individu-
als can be taught how to solve problems collaboratively is still unknown. It is clear
that the distinct classes of sub-skills outlined in the framework can be taught. What
is not so clear is whether an individual can be taught to draw on those sub-skills
appropriately. It is at this point that the distinction between the process approach
and a cognitive approach becomes the point of tension, and the focus for future
research.
Assessment of Collaborative Problem Solving Skills
In order to assess problem solving skills in educational contexts, we must think
about tasks that address the various skill classes described above. One of the deci-
sions involved in identifying tasks relates to a trade-off between task realism and
measurability. As to realism, collaborative problem solving can be found in many
everyday activities: sitting together with a colleague and trying to format a software
object; jointly developing a policy for student cafeteria use that takes into account
the interests of various stakeholders; identifying a movie that is in line with the taste
of a group of friends – all these are examples in which a group must identify a non-
obvious solution that requires shared understanding and negotiation among collabo-
rators. What these tasks often have in common is that they are ill-defined. For
instance, the desired goal state cannot be clearly described (e.g. agreeing on a good
cafeteria policy; finding a suitable movie). Furthermore, problems can be ill-defined
because individuals and groups are not fully aware of the repertoire of actions that
can lead them from the current state towards a goal state.
While many problems in real life are collaborative and ill-defined, the vast
majority of research on problem solving has dealt with well-defined problems
that are presented to individuals. A typical example of a well-researched prob-
lem is the “Tower of Hanoi” where individuals move disks according to specified
rules in order to transform an original state into a well-defined goal state.
Beginning with the seminal work by Newell and Simon (1972), an accumulation
of research evidence has begun to show how individual problem solving behav-
iour can be understood and computationally modelled as the application of sim-
ple rules and heuristics. An advantage of these well-defined tasks is that their
representational and computational dynamics are quite well understood.
Consequently, there are agreed-upon standards for how to measure problem solv-
ing effectiveness.
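To illustrate (our sketch in Python, not taken from the chapter): the Tower of Hanoi yields to a single recursive rule, and because the optimal solution length of 2^n − 1 moves is known, effectiveness can be scored against an agreed standard.

```python
# Illustrative sketch: the Tower of Hanoi solved by one simple recursive rule.
def hanoi(n, source="A", target="C", spare="B", moves=None):
    """Return the list of (from, to) moves that transfers n disks."""
    if moves is None:
        moves = []
    if n > 0:
        hanoi(n - 1, source, spare, target, moves)  # clear the n-1 smaller disks
        moves.append((source, target))              # move the largest disk
        hanoi(n - 1, spare, target, source, moves)  # restack the smaller disks
    return moves

solution = hanoi(3)
print(solution)                   # [('A', 'C'), ('A', 'B'), ('C', 'B'), ...]
print(len(solution) == 2**3 - 1)  # True: matches the known optimum of 7 moves
```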
The differences between real-world problems and problems as they are often
analysed in psychological research raise the question of whether collaborative
problem solving is best addressed by the use of well-defined or ill-defined tasks.
Well-defined tasks allow for easier comparisons between different tasks and between
different problem solvers, thereby providing the basis for the establishment of prob-
lem solving standards. Using well-defined tasks should also increase the teachabil-
ity of collaborative problem solving, as the problem solving steps for well-defined
tasks can be easily demonstrated, understood, adopted in the pursuit of alternative
solution paths, or reflected upon. Therefore ATC21S has taken the approach of
beginning, in some instances, with tasks designed for individual problem solving
and transforming these into collaborative tasks. For example, a typical approach to create collabora-
tive (rather than cooperative) contexts is to introduce resource interdependence
(Johnson et al. 1998). Modification of tasks can be implemented in this way to
ensure that a task cannot be solved by any one individual working alone. The
disadvantage of this approach is that it may not teach students to deal with truly ill-
defined problems, since the constraints of the tasks are such that all resources are
available, notwithstanding their lack of visibility.
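In essence, resource interdependence means partitioning what the task requires so that no single partition is sufficient. The following is a minimal sketch of that idea (ours, in Python; the task and its resource names are invented for illustration):

```python
# Illustrative sketch: resource interdependence for a hypothetical task that
# requires a known set of resources to be solved.
REQUIRED = {"map", "key", "code", "tool"}  # invented resources for the example

def partition_resources(required):
    """Split the required resources between collaborators A and B."""
    items = sorted(required)
    return set(items[::2]), set(items[1::2])

a, b = partition_resources(REQUIRED)
assert not REQUIRED <= a and not REQUIRED <= b  # neither can solve the task alone
assert REQUIRED <= (a | b)                      # together, the group can
print("A holds:", a, "| B holds:", b)
```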
Summary
With its wide applicability to real-life situations, collaborative problem solving –
the joint and shared activity of transforming a current problem state into a desired
goal state – can be regarded as one of the key skills in the 21st century. This chapter
has proposed a framework that breaks down collaborative problem solving skills
into a number of components. Most importantly, the social skills of collaboration
can be distinguished from the cognitive skills of problem solving. Within these broad
classes, certain skill aspects can be identified. The framework draws on research
from several fields, and lays the ground for a deeper analysis of collaborative prob-
lem solving. One of the main purposes of this framework is to inform the design of
collaborative problem solving tasks that touch on as many of the identified skill sets
as possible. Once results from such tasks are available, testing of the theoretical
hypotheses underlying the framework can take place in order to validate or refine
the framework, thereby deepening our understanding of collaborative problem
solving.
References
Brodbeck, F. C., & Greitemeyer, T. (2000). Effects of individual versus mixed individual and group
experience in rule induction on group member learning and group performance. Journal of
Experimental Social Psychology, 36(6), 621–648.
Brown, A. (1987). Metacognition, executive control, self-regulation, and other more mysterious
mechanisms. In F. Reiner & R. Kluwe (Eds.), Metacognition, motivation, and understanding
(pp. 65–116). Hillsdale: Erlbaum.
Clark, H. H. (1996). Using language. Cambridge: Cambridge University Press.
Clark, H. H., & Murphy, G. L. (1982). Audience design in meaning and reference. Advances in
Psychology, 9, 287–299.
Cohen, E. G. (1994). Restructuring the classroom: Conditions for productive small groups. Review
of Educational Research, 64, 1–35.
Crowston, K., Rubleske, J., & Howison, J. (2006). Coordination theory: A ten-year retrospective.
In P. Zhang & D. Galletta (Eds.), Human-computer interaction in management information
systems (pp. 120–138). Armonk: M.E. Sharpe.
De Wit, F. R. C., & Greer, L. L. (2008). The black-box deciphered: A meta-analysis of team diver-
sity, conflict, and team performance. In Academy of Management best paper proceedings,
Anaheim.
Dehler, J., Bodemer, D., Buder, J., & Hesse, F. W. (2011). Guiding knowledge communication in
CSCL via group knowledge awareness. Computers in Human Behavior, 27(3), 1068–1078.
Diehl, M., & Stroebe, W. (1987). Productivity loss in brainstorming groups: Toward the solution of
a riddle. Journal of Personality and Social Psychology, 53(3), 497–509.
Dillenbourg, P., Baker, M., Blaye, A., & O’Malley, C. (1996). The evolution of research on col-
laborative learning. In E. Spada & P. Reiman (Eds.), Learning in humans and machines:
Towards an interdisciplinary learning science (pp. 189–211). Oxford: Elsevier.
Doise, W., & Mugny, G. (1984). The social development of the intellect. Oxford: Pergamon Press.
Flavell, J. H. (1976). Metacognitive aspects of problem solving. In L. B. Resnick (Ed.), The nature
of intelligence (pp. 231–236). Hillsdale: Erlbaum.
Greeno, J. G. (1998). The situativity of knowing, learning, and research. American Psychologist,
53(1), 5–26.
Griffin, P. (2014). Performance assessment of higher order thinking. Journal of Applied
Measurement, 15(1), 1–16.
Gunzelmann, G., & Anderson, J. R. (2003). Problem solving: Increased planning with practice.
Cognitive Systems Research, 4, 57–76.
Gurtner, A., Tschan, F., Semmer, N. K., & Nägele, C. (2007). Getting groups to develop good
strategies: Effects of reflexivity interventions on team process, team performance, and shared
mental models. Organizational Behavior and Human Decision Processes, 102(2), 127–142.
Hastie, R., & Pennington, N. (1991). Cognitive and social processes in decision making. In L. B.
Resnick, J. M. Levine, & D. Teasley (Eds.), Perspectives on socially shared cognition (pp. 308–
327). Washington, DC: American Psychological Association.
Hayes-Roth, B., & Hayes-Roth, F. (1979). A cognitive model of planning. Cognitive Science, 3,
275–310.
Higgins, E. T. (1981). Role taking and social judgment: Alternative developmental perspectives
and processes. In J. H. Flavell & L. Ross (Eds.), Social cognitive development: Frontiers and
possible futures (pp. 119–153). Cambridge, UK: Cambridge University Press.
Hinsz, V. B., Tindale, R. S., & Vollrath, D. A. (1997). The emerging conception of groups as infor-
mation processors. Psychological Bulletin, 121, 43–64.
Horton, W. S., & Keysar, B. (1996). When do speakers take into account common ground?
Cognition, 59, 91–117.
Hutchins, E. (1995). Cognition in the wild. Cambridge, MA: MIT Press.
Johnson, D., Johnson, R., & Holubec, E. (1998). Cooperation in the classroom. Boston: Allyn and
Bacon.
Jonas, E., Schulz-Hardt, S., Frey, D., & Thelen, N. (2001). Confirmation bias in sequential information
search after preliminary decisions: An expansion of dissonance theoretical research on selective
exposure to information. Journal of Personality and Social Psychology, 80(4), 557–571.
Karau, S. J., & Williams, K. D. (1993). Social loafing: A meta-analytic review and theoretical
integration. Journal of Personality and Social Psychology, 65(4), 681–706.
Klimoski, R., & Mohammed, S. (1994). Team mental model: Construct or metaphor? Journal of
Management, 20, 403–437.
Larson, J. R., Jr., & Christensen, C. (1993). Groups as problem-solving units: Toward a new mean-
ing of social cognition. British Journal of Social Psychology, 32, 5–30.
Laughlin, P. R., & Ellis, A. L. (1986). Demonstrability and social combination processes on math-
ematical intellective tasks. Journal of Experimental Social Psychology, 22, 177–189.
Lave, J., & Wenger, E. (1991). Situated learning: Legitimate peripheral participation. New York:
Cambridge University Press.
Mayer, R. (1983). Thinking, problem solving, cognition. New York: W.H. Freeman and Company.
Miller, G. A., Galanter, E., & Pribram, K. H. (1960). Plans and the structure of behaviour.
New York: Holt, Rinehart & Winston.
Nardi, B. A. (1996). Context and consciousness: Activity theory and human-computer interaction.
Cambridge, MA: MIT Press.
Newell, A., & Simon, H. A. (1972). Human problem solving. Englewood Cliffs: Prentice-Hall.
Norton, R. (1975). Measurement of ambiguity tolerance. Journal of Personality Assessment, 39(6),
607–619.
OECD. (1999). Measuring student knowledge and skills: A new framework for assessment. Paris:
OECD.
Peterson, R. S., & Behfar, K. J. (2005). Leadership as group regulation. In D. M. Messick & R. M.
Kramer (Eds.), The psychology of leadership: New perspectives and research (pp. 143–162).
Mahwah: Erlbaum.
Piaget, J., & Inhelder, B. (1962). The psychology of the child. New York: Basic Books.
Polya, G. (1973). How to solve it. Princeton: Princeton University Press.
Roschelle, J. (1992). Learning by collaborating: Convergent conceptual change. The Journal of the
Learning Sciences, 2(3), 235–276.
Roschelle, J., & Teasley, S. (1995). The construction of shared knowledge in collaborative problem
solving. In C. E. O’Malley (Ed.), Computer supported collaborative learning (pp. 69–97).
Heidelberg: Springer.
Salomon, G. (Ed.). (1993). Distributed cognitions. Cambridge: Cambridge University Press.
Scardamalia, M. (2002). Collective cognitive responsibility for the advancement of knowledge. In
B. Smith (Ed.), Liberal education in a knowledge society (pp. 67–98). Chicago: Open Court.
Schoenfeld, A. H. (1999). Looking toward the 21st century: Challenges of educational theory and
practice. Educational Researcher, 28, 4–14.
Schulz-Hardt, S., & Brodbeck, F. C. (2008). Group performance and leadership. In M. Hewstone,
W. Stroebe, & K. Jonas (Eds.), Introduction to social psychology: A European perspective (4th
ed., pp. 264–289). Oxford: Blackwell.
Sfard, A. (1998). On two metaphors for learning and the dangers of choosing just one. Educational
Researcher, 27(2), 4–13.
Star, J. R., & Rittle-Johnson, B. (2008). Flexibility in problem solving: The case of equation solv-
ing. Learning and Instruction, 18(6), 565–579.
Stasser, G., & Titus, W. (1985). Pooling of unshared information in group decision making: Biased
information sampling during discussion. Journal of Personality and Social Psychology, 48(6),
1467–1478.
Stasser, G., & Vaughan, S. I. (1996). Models of participation during face-to-face unstructured dis-
cussion. In E. H. Witte & J. H. Davis (Eds.), Understanding group behavior: Consensual
action by small groups (Vol. 1, pp. 165–192). Mahwah: Erlbaum.
Steiner, I. D. (1972). Group processes and productivity. New York: Academic.
Thompson, L. L., Wang, J., & Gunia, B. C. (2010). Negotiation. Annual Review of Psychology, 61,
491–515.
Trötschel, R., Hüffmeier, J., Loschelder, D. D., Schwartz, K., & Gollwitzer, P. M. (2011).
Perspective taking as a means to overcome motivational barriers in negotiations: When putting
oneself into the opponent’s shoes helps to walk toward agreements. Journal of Personality and
Social Psychology, 101(4), 771.
Van Gundy, A. B. (1987). Creative problem solving: A guide for trainers and management.
Westport: Greenwood Press.
Van Knippenberg, D., & Schippers, M. C. (2007). Work group diversity. Annual Review of
Psychology, 58, 515–541.
Van Knippenberg, D., De Dreu, C. K. W., & Homan, A. C. (2004). Work group diversity and group
performance: An integrative model and research agenda. Journal of Applied Psychology, 89,
1008–1022.
Vygotsky, L. S. (1978). Mind in society: The development of higher psychological processes.
Cambridge, MA: Harvard University Press.
Wegner, D. M. (1986). Transactive memory: A contemporary analysis of the group mind. In
B. Mullen & G. R. Goethals (Eds.), Theories of group behavior (pp. 185–205). New York:
Springer.
Weinstein, E. A. (1969). The development of interpersonal competence. In D. A. Goslin (Ed.),
Handbook of socialization theory and research (pp. 753–775). Chicago: Rand McNally &
Company.
Weldon, E., & Weingart, L. R. (1993). Group goals and group performance. British Journal of
Social Psychology, 32(4), 307–334.
Wittenbaum, G. W., Hollingshead, A. B., & Botero, I. C. (2004). From cooperative to motivated
information sharing in groups: Moving beyond the hidden profile paradigm. Communication
Monographs, 71, 286–310.
Wood, W., Lundgren, S., Ouellette, J., Busceme, S., & Blackstone, T. (1994). Minority influence:
A meta-analytic review of social influence processes. Psychological Bulletin, 115, 323–345.
Zuckerman, M., Kernis, M. H., Guarnera, S. M., Murphy, J. F., & Rappoport, L. (1983). The ego-
centric bias: Seeing oneself as cause and target of others’ behavior. Journal of Personality,
51(4), 621–630.
Chapter 3
Assessment of Learning in Digital Networks
Mark Wilson and Kathleen Scalise
Abstract This chapter provides both conceptual and empirical information about
the skillset of Learning in Digital Networks – Information Communications
Technologies (LDN-ICT). Data are drawn from the pilot phase of the ATC21S™
project research and development process, and were collected from August to
November 2011 across Australia, Finland, Singapore and the U.S.A. (The acronym
ATC21S™ has been globally trademarked. For purposes of simplicity the acronym
is presented throughout the chapter as ATC21S.) The chapter concludes with a
discussion of ideas about reporting and use of the consequent developmental
progression which underlies the construct.
How to Assess Digital Learning
The ATC21S view of assessment is based on beliefs that the current practice of
schooling is outmoded in the global working environment. For example, Cisco,
Intel and Microsoft (2008) contrasted the typical context of student standardised
assessment – having students take tests individually – with a situation in the outside
world where people work both individually and in groups to share complimentary
skills and accomplish shared goals. A second difference between schooling and the
contemporary workplace arises from the nature of the test subjects themselves:
today, school subjects are divided by disciplinary boundaries, but in the workplace
this subject knowledge is applied across disciplinary boundaries in the process of
solving real world problems. Moreover, these problems are not solvable by simply
recalling facts or applying simple procedures, but are complex and ill-structured –
and set in specific concrete contexts. Finally, the traditional “closed book” testing
context is contrasted with a setting where people have access to a vast array of
information and technological tools, where the challenge is to strategically craft a
solution (CIM 2008).
The ATC21S project commissioned a series of “white papers” to help establish
this effort (now published in Griffin et al. 2012). Among them, the most important
for this chapter are the “skills paper” (Binkley et al. 2012), and the “methodology
paper” (Wilson et al. 2012). The first of these white papers lays out a scheme for
encompassing and understanding the nature of these “new” skills and the ways in
which they relate to traditional school subjects. The scheme is referred to as
“KSAVE,” standing for Knowledge, Skills and Attitudes, Values and Ethics. Using
this scheme as a basis, two particular twenty-first century skills were chosen for
inclusion in an ATC21S assessment demonstration – collaborative problem solving
and LDN-ICT. The latter is the focus of this chapter, and our particular slant on that
will be described below. The second white paper lays out an approach to developing
the new assessments, based on the insights of a groundbreaking U.S. National
Research Council report (NRC 2001). The approach chosen is called the BEAR
Assessment System (BAS: Wilson 2005, 2009a; Wilson and Sloane 2000), and it
will not be detailed here other than to note that it is based on the following four
principles.
Principle 1: Assessment should be based on a developmental perspective of student
learning; the building block is a construct map of a progress variable that visu-
alizes how students develop and how we think about their possible changes in
response to items.
Principle 2: There must be a match between what is taught and what is assessed; the build-
ing block is the items design, which describes the most important features of
the format of the items—the central issue, though, is how the items design
results in responses that are related back to the levels of the construct map.
Principle 3: Teachers must be the managers of the system, with the tools to use it efficiently
and effectively; the building block is the outcome space, or the set of categories
of student responses that make sense to teachers.
Principle 4: There is evidence of quality in terms of reliability and validity studies and
evidence of fairness; the building block is a measurement model that provides
for multidimensional item responses and links over time, both longitudinally
within cohorts and across cohorts.
(Wilson 2009b)
How these principles become embedded in the process and the product of the
assessment development will be exemplified in the account below.
Learning in Networks: The Construct Map
The term “LDN-ICT” encompasses a wide range of subtopics, including learning in
networks, information literacy, digital competence and technological awareness, all
of which contribute to learning to learn through the development of enabling skills.
In the current global economy, learning through digital networks, and the use of
digital media, is becoming increasingly important in private life, in learning and in
professional life. We predict that this aspect of learning will become very important
in the future. We see this as being true at the individual level and local or regional
levels as well as at international levels.
For the ATC21S project, the focus of LDN-ICT was on learning in digital net-
works, which was seen as being made up of four strands:
• Functioning as a consumer in networks;
• Functioning as a producer in networks;
• Participating in the development of social capital through networks;
• Participating in intellectual capital (i.e., collective intelligence) in networks.
In our view, LDN-ICT involves thinking across platforms and hardware imple-
mentations, and also thinking outside the computer itself, to other devices and uses
of technology.
The Four Strands
The four strands mentioned above are seen as interacting together in the activity of
learning in networks. They are conceptualised as parallel developments that are
interconnected and make up that part of LDN-ICT that is concerned with learning
in networks.
First, functioning as a Consumer in Networks (CiN) involves obtaining, manag-
ing and utilizing information and knowledge from shared digital resources and
experts in order to benefit private and professional lives. It involves questions
such as:
• Will a user be able to ascertain how to perform tasks (e.g. by exploration of the
interface) without explicit instruction?
• How long will it take an experienced user to find an answer to a question using
their mobile device?
• What arrangement of information on a display yields a more effective visual
search?
• How difficult will it be for a user to find information on a website?
Second, functioning as a Producer in Networks (PiN) involves creating, develop-
ing, organizing and re-organizing information/knowledge in order to contribute to
shared digital resources.
Third, developing and sustaining Social Capital through Networks (SCN)
involves using, developing, moderating, leading and brokering the connectivities
within and between individuals and social groups in order to marshal collaborative
action, build communities, maintain an awareness of opportunities and integrate
diverse perspectives at community, societal and global levels.
Fourth, developing and sustaining Intellectual Capital through Networks (ICN)
involves understanding how tools, media and social networks operate and using
appropriate techniques through these resources to build collective intelligence and
integrate new insights into personal understandings.
In Tables 3.1, 3.2, 3.3, and 3.4, levels of these four strands have been described
as hypothesized construct maps showing an ordering of skills or competencies
involved in each. At the lowest levels of each are the competencies that one would
expect to see exhibited by a novice or beginner. At the top of each table are the
competencies that one would expect to see exhibited by an experienced person –
someone who would be considered very highly literate in LDN-ICT. These con-
struct maps are hierarchical in the sense that a person who would normally exhibit
competencies at a higher level would also be expected to be able to exhibit the compe-
tencies at lower levels of the hierarchy. The maps are also probabilistic in the sense
that they represent different probabilities that a given competence would be expected
to be exhibited in a particular context rather than certainties that the competence
would always be exhibited.
These levels may be “staggered” in the sense that they have not been positioned
on the same fixed scale for each strand. We see them as strands of the same broad
construct – LDN-ICT – but the lower levels of one strand may be equivalent to the
middle or even higher levels of other strands. This concept is represented in Fig. 3.1.
It should also be noted that these construct maps were developed to encompass the
full range of competencies within each strand rather than the range that one might
expect to be exhibited by school students at middle and secondary levels. The ques-
tion of targeting assessments to match what students can do is an empirical question
to be determined through consultations with teachers and cognitive laboratories
with students, as well as the results of pilot and field studies.
Table 3.1 Functioning as a Consumer in Networks (CiN)
CiN3 Discriminating consumer
• Effectively judges credibility of sources/people
• Integrates information in coherent knowledge framework
• Conducts searches suited to personal circumstances
• Filters, evaluates, manages, organises and reorganises information/people
• Selects optimal tools for tasks/topics
CiN2 Conscious consumer
• Selects appropriate tools and strategies (strategic competence)
• Constructs targeted searches
• Compiles information systematically
• Knows that credibility is an issue (web pages, people, networks)
CiN1 Emerging consumer
• Performs basic tasks
• Has little or no concept of credibility
• Searches for pieces of information using common search engines (e.g. movie guides)
• Knows that tools exist for networking (e.g. Facebook)
Table 3.2 Functioning as a Producer in Networks (PiN)
PiN3 Creative producer
• Possesses team-situational awareness in process
• Optimises assembly of distributed contribution to products
• Extends advanced models (e.g. business models)
• Produces attractive digital products using multiple technologies/tools
• Chooses among technological options for producing digital products
PiN2 Functional producer
• Establishes and manages networks & communities
• Possesses awareness of planning for building attractive websites, blogs, games
• Organizes communication within social networks
• Develops models based on established knowledge
• Develops creative & expressive content artifacts
• Possesses awareness of security & safety issues (ethical and legal aspects)
• Uses networking tools and styles for communication among people
PiN1 Emerging producer
• Produces simple representations from templates
• Starts an identity
• Uses a computer interface
• Posts an artifact
Table 3.3 Developing Social Capital through Networks (SCN)
SCN4 Visionary connector
• Takes a cohesive leadership role in building a social enterprise
• Reflects on experience in social capital development
SCN3 Proficient connector
• Initiates opportunities for developing social capital through networks (e.g. support for development)
• Encourages multiple perspectives and supports diversity in networks (social brokerage skills)
SCN2 Functional connector
• Encourages participation in and commitment to a social enterprise
• Possesses awareness of multiple perspectives in social networks
• Contributes to building social capital through a network
SCN1 Emerging connector
• Participates in a social enterprise
• Is an observer or passive member of a social enterprise
• Knows about social networks
Table 3.4 Developing Intellectual Capital through Networks (ICN)
ICN4 Visionary builder
• Questions existing architecture of social media and develops new architectures
• Functions at the interfaces of architectures to embrace dialogue
ICN3 Proficient builder
• Understands and uses architecture of social media such as tagging, polling, role-playing and modelling spaces to link to knowledge of experts in an area
• Identifies signal versus noise in information
• Interrogates data for meaning
• Makes optimal choice of tools to access collective intelligence
• Shares and reframes mental models (plasticity)
ICN2 Functional builder
• Acknowledges multiple perspectives
• Uses thoughtful organization of tags
• Understands mechanics of collecting and assembling data
• Knows when to draw on collective intelligence
• Shares representations
ICN1 Emerging builder
• Possesses knowledge of survey tools
• Is able to make tags
• Posts a question
Learning in Networks: Three Scenarios
The Berkeley Evaluation and Assessment Research (BEAR) Center at UC Berkeley
developed three scenarios in which to place tasks and questions that could be used
as items to indicate where a student might be placed along each of the four strands.
Each scenario was designed to address more than one strand, but there were differ-
ent emphases in how the strands were represented among the scenarios. Where pos-
sible, we took advantage of existing web-based tools for instructional development.
These are each briefly described below.
Arctic Trek
One potential mechanism for the assessment of student ability in the learning net-
work aspect of LDN-ICT is to model assessment practice through a set of exemplary
classroom materials. The module that has been developed is based on the Go North/
Polar Husky information website (www.polarhusky.com) run by the University of
Minnesota (see Fig. 3.2). The Go North website is an online adventure learning
project based around arctic environmental expeditions. The website is a learning
hub with a broad range of information and many different mechanisms to support
networking with students, teachers and experts.
Fig. 3.1 The four strands of LDN-ICT, represented as a four-part learning progression
Fig. 3.2 Two screen-shots from the Go North website
LDN-ICT resources developed for
this module focus mainly on the functioning as a Consumer in Networks strand. The
tour through the site for the ATC21S demonstration scenario is conceived as a “col-
laboration contest,” or virtual treasure hunt. The Arctic Trek scenario views social
networks through LDN-ICT as an aggregation of different tools, resources and peo-
ple that together build community in areas of interest. In this task, students in small
teams ponder tools and approaches to unravel clues through the Go North site by
touring scientific and mathematics expeditions of actual scientists. The task helps
teachers model ways to integrate technology across different subjects. It also shows
how the Go North site focuses on space to represent itself, and how this can be com-
bined with tools that utilize texting, chat and dialogue as forms of LDN-ICT.
Webspiration
In the second demonstration task, framed as part of a poetry work unit, students of
ages 11–15 read and analyse well-known poems. In a typical school context, we
might imagine that a teacher notices that his or her students are having difficulty
articulating the moods and meanings of some poems – in traditional teacher-centered
instruction on literature the student role tends to be passive. Often, teachers find that
students are not spontaneous in their responses to poems but tend to wait to hear
what the teacher has to say, and then agree with it. To help encourage students to
formulate their own ideas on the poems, we use a collaborative graphic organiser
through the Webspiration online tool. The teacher directs the students to use
Webspiration to create an idea map – collaboratively using the graphic organizer
tools – and to analyze each poem they read. Students submit their own ideas and/or
build on classmate thoughts. Figure 3.3 shows a sample screen from the computer
module.
Second Language Chat
This scenario was developed as a peer-based second language learning environment
through which students interact in learning. Developing proficiency in a second
language (as well as in the mother tongue) requires ample opportunities to read,
write, listen and speak. This assessment scenario asks students to set up a technol-
ogy/network-based chat room, invite participants and facilitate a chat – in two lan-
guages. It also involves evaluating the chat and working with virtual rating systems
and online tools such as spreadsheets. The welcome screen for this scenario is
shown in Fig. 3.4. “Conversation partner” language programs such as this have
sprung up worldwide in recent years. They bring together students wishing to prac-
tise a language with native speakers, often in far-flung parts of the world. The cul-
tural and linguistic exchanges that result demonstrate how schools can dissolve the
physical boundaries of walls and classrooms. They also tap rich new learning spaces through the communication networks of LDN-ICT. This task shows how they can also provide ample assessment opportunities in digital literacy.
Fig. 3.3 A sample page from the Webspiration scenario
Sample Tasks from Arctic Trek
The welcome screen from Arctic Trek is shown in Fig. 3.5. The student goal is to discover answers to six questions, and each student must join a team to do this (see Fig. 3.6). Once the team is assembled, it must assign roles to each team member (Figs. 3.7 and 3.8). There is also a Team Notebook where the team’s findings will be recorded (Fig. 3.9). The team then finds out about the contest (Fig. 3.10). A practice question comes first – members must use the web resources listed in the right-hand panel to answer it (Fig. 3.11). If a student cannot write down a response, then he or she can request a hint (and this can be repeated). The hints appear at the bottom of the screen (Fig. 3.12). If the hints are not enough (eventually they virtually tell the student what to do), the student may request teacher assistance by hitting the “T” button at the bottom right-hand corner; when that happens, the teacher must fill in an information box (Fig. 3.13). A real task – student foraging in an online display – is partially shown in Fig. 3.14. Here the student has been asked to examine a map that shows where polar bears are found, and must describe the way the information is conveyed on the map.
Fig. 3.4 The welcome page from the Two-language chat scenario
Samples of student Team Notebooks are shown in Figs. 3.15 and 3.16. The first,
Notebook A (from a group of 15-year-olds), shows clear role-selection, responses
to the clues and explanations of response choice. The second, Notebook B (from a
group of 11-year-olds), shows a very different team response – mainly arguing
about roles. In this case, the responses to the questions are missing. Samples of data
codes from two different teams are shown in Figs. 3.17 and 3.18. In the top panel of
Fig. 3.17, the data codes show that Team #1 (a) successfully retrieved the team code,
and (b) successfully accessed the shared notebook. They also show that (c) the team
successfully assigned team roles, and there was consensus among the team mem-
bers about those roles. In the lower panel of Fig. 3.17, the data codes show that
Team #1 (d) gave the correct answer for the number of colours, and (e) correctly listed the colours, and noted the issue about missing data. It also shows (f) that they used no hints or teacher assistance, and (g) that their self-evaluation of their collaboration was “Good.” The account of Team #2, as shown in the data codes, is very different. In the top panel of Fig. 3.18, the data codes show that Team #2 (a) did not retrieve the team code, but (b) did successfully access the shared notebook. They also show that (c) the team was unsuccessful in assigning team roles, and that there was no consensus among the team members about those roles. In the lower panel of Fig. 3.18, the data codes show that Team #2 (d) gave the correct answer for the number of colours, and (e) they compared answers, but did not note the issue about missing data. It also shows (f) that they used no hints or teacher assistance, and (g) that their self-evaluation of their collaboration was “Great” because “everyone in my group agreed.”
Fig. 3.5 The welcome screen from Arctic Trek
Fig. 3.6 Meeting the team
Fig. 3.7 Setting up the team roles
Fig. 3.8 Person 1 has been assigned as “Recorder”
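The data codes themselves come from rule-based parsing of the logged event stream. The following minimal sketch is ours, not the actual ATC21S coding scheme – the event names, field names and decision rules are all assumptions – but it illustrates how indicators (a) through (g) might be derived from a team’s log:

    # Illustrative only: event names, fields and rules are assumed here,
    # not taken from the actual ATC21S log format or coding scheme.
    def code_team_events(events, correct_colour_count):
        """Derive indicators (a)-(g) from one team's logged events.

        `events` is a list of dicts such as
        {"actor": "student1", "action": "assign_role", "value": "Recorder"}.
        """
        actions = [e["action"] for e in events]
        codes = {
            # (a) team code retrieved, (b) shared notebook accessed
            "a_retrieved_team_code": "retrieve_team_code" in actions,
            "b_accessed_notebook": "open_shared_notebook" in actions,
        }
        # (c) roles count as assigned with consensus only if at least two
        # members stated roles and no role was claimed twice.
        roles = {e["actor"]: e["value"] for e in events
                 if e["action"] == "assign_role"}
        codes["c_roles_with_consensus"] = (
            len(roles) >= 2 and len(set(roles.values())) == len(roles))
        # (d) correct colour count submitted, (e) missing-data issue noted
        answers = [e["value"] for e in events if e["action"] == "submit_answer"]
        codes["d_correct_colour_count"] = correct_colour_count in answers
        codes["e_noted_missing_data"] = any(
            "missing" in str(e.get("text", "")).lower() for e in events)
        # (f) no hints or teacher assistance requested
        codes["f_no_hints_or_teacher"] = not any(
            a in ("request_hint", "request_teacher") for a in actions)
        # (g) the team's final self-evaluation of its collaboration
        evaluations = [e["value"] for e in events
                       if e["action"] == "self_evaluate"]
        codes["g_self_evaluation"] = evaluations[-1] if evaluations else None
        return codes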
Fig. 3.9 Setting up the shared Team Notebook
Fig. 3.10 The collaboration contest
The Outcome Space for the Three Scenarios
Each item was developed to target one or more of the four strands, and the expected
range of levels that would be represented in the item responses was also noted.
Where the responses are selected from a fixed set (as in a multiple-choice item), this
can be planned ahead of time, but for open-ended items, this is something that needs to be empirically investigated.
Fig. 3.11 An opportunity to practice
Fig. 3.12 A hint
The tabulation is shown in Table 3.5. As can be seen,
the first three levels were reasonably well covered, but Level 4, which we expect to
see seldom for students in this population, had only one instance.
Fig. 3.13 The teacher aid box
Fig. 3.14 The third clue
Fig. 3.15 Sample notebook A
Fig. 3.16 Sample notebook B
Fig. 3.17 Sample collaboration #1
Samples of teachers in Australia, Finland, Singapore and the United States were
asked to provide feedback about draft tasks for LDN-ICT. Those teachers were pro-
vided with access through a teacher interface and for each set of tasks they were
asked a set of questions to consider. These questions included:
For Webspiration
What skills or capabilities do you think the tasks are targeting?
Considering the capabilities of your students, are there any questions or activities that should be eliminated from this scenario for students of specified ages (11, 13 and 15 years)?
Fig. 3.18 Sample collaboration #2 (Note that the locations of points “a” through “g” in the text are
equivalent to those for Fig. 3.17)
For Arctic Trek
Identify and write down two clues to retain and two clues to eliminate from the task
for students of specified ages.
Table 3.5 The number of data points from each scenario and their planned allocation to the levels from each strand
ICT literacy – learning in digital networks: construct/learning outcomes. Counts in each cell are given as Web / Arctic / 2LChat.

Levels^a (progressive) | Consumer | Producer | Social capital | Intellectual capital | Total
Level 4 | N/A | N/A | 0 / 1 / 0 | 0 / 0 / 0 | 0 / 1 / 0
Level 3 | 0 / 2 / 0 | 0 / 2 / 0 | 0 / 6 / 1 | 10 / 2 / 1 | 10 / 12 / 2
Level 2 | 8 / 6 / 0 | 4 / 16 / 8 | 7 / 0 / 6 | 6 / 7 / 0 | 25 / 29 / 14
Level 1 | 2 / 2 / 2 | 4 / 0 / 6 | 1 / 0 / 6 | 2 / 2 / 0 | 9 / 4 / 14
Total | 10 / 10 / 2 | 8 / 18 / 14 | 8 / 7 / 13 | 18 / 11 / 1 | 44 / 46 / 30

^a Some CR (constructed response) items will measure up through the listed level (listed level is top score)
For Language Chat
At what age do you believe native speakers would be able to learn and use a rating
system?
At what age would native speakers be able to facilitate a chat topic?
Suggest a chat topic for language learners at the selected age that has the potential
to engage them.
Cognitive laboratories, which involve small samples of students who attempt the
tasks and respond to questions about them, were also carried out in the four coun-
tries on all three task demonstrations. Information from these two sources contrib-
uted to the final editing of the tasks, and to the compilation of the information in
Table 3.5.
Results from the Pilot Study
In the pilot study, two of the three scenarios were selected for further studies with
students: the science/math Arctic Trek collaboration contest and the Webspiration
shared literature analysis task. These were identified by participating countries as
the most desirable to pilot at this time, for several reasons. One was that they were better aligned with traditional school systems in the participating countries, which rarely used cross-country chat tools in the classroom but sometimes did employ math simulations and online scientific documents, as well as graphical and drawing tools, for student use.
student use. By contrast, the third task – the Second Language Chat – was described
by participating countries, teachers and schools as a forward-looking, intriguing
scenario, but farther away on the adoption curve for school-based technology.
Not all of the planned automated scoring and data analysis for the items in the
two piloted scenarios has been applied to this data set, as the total number of cases
was too small for the empirically-based scoring to be successfully calibrated. This
will be completed when larger data sets are available. Each of the two scenarios was
presented in three forms, for 11, 13 and 15 year-olds respectively, with a subset of
common items across the three forms. Due to the nature of the pilot study data
design, results for the two scenarios are reported separately. The data were analysed
using a partial credit item response model (Masters 1982), and the estimation soft-
ware was ConQuest 2.0 (Wu et al. 2007).
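For reference, the partial credit model specifies the probability that a student with proficiency θ scores x on item i with ordered categories 0, …, m_i as follows (this is the standard statement of the model; the δ_ik are the item step parameters that ConQuest estimates, with the usual convention δ_i0 ≡ 0):

\[
P(X_i = x \mid \theta) \;=\; \frac{\exp\left(\sum_{k=0}^{x} (\theta - \delta_{ik})\right)}{\sum_{h=0}^{m_i} \exp\left(\sum_{k=0}^{h} (\theta - \delta_{ik})\right)}, \qquad x = 0, 1, \ldots, m_i .
\]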
For the Webspiration scenario, 176 cases were collected across Australia,
Finland, Singapore and the U.S.A. Approximately 90 % of the items were auto-
scored and 10 % were hand-scored (by trained scorers using a common scoring
guide). There are 61 items in the three forms, and 16 are common across all forms.
Approximately 10 % of the items showed significant misfit – these items will be
retained for further examination in the field test. The reliability was estimated at
0.93 using the EAP formulation (Wu et al. 2007). The Wright Map, showing how
items compare to students on the composite Learning in Networks latent variable is
shown in Fig. 3.19.
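In one common formulation (our gloss on the approach; implementation details are in the ConQuest documentation), the EAP reliability is the share of estimated latent variance recovered by the posterior means:

\[
\rho_{\text{EAP}} \;=\; \frac{\operatorname{Var}\!\left(E[\theta \mid \mathbf{x}]\right)}{\operatorname{Var}\!\left(E[\theta \mid \mathbf{x}]\right) + E\!\left[\operatorname{Var}(\theta \mid \mathbf{x})\right]} ,
\]

so a value such as 0.93 indicates that the posterior means vary substantially relative to their average posterior uncertainty.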
Note that, due to the small number of cases available at this point, the four strands
are all mapped onto the same composite variable. With a greater number of sample
cases, this will be investigated using a multidimensional model. The map shows that
students are reasonably well-matched by the range of item difficulties. Examination
of the match between empirical locations of the item responses and the four strand
construct maps resulted in a segmentation of the variable into five levels that cor-
respond quite well with the planned levels.
The five levels are indicated by the alternating yellow and white bands in
Fig. 3.19. The lowest two bands are associated with the first level of the strand con-
struct maps. In the lowest band, students are required to move information (e.g., cut/
paste, drag/drop, texting), ask simple questions, and begin to use rankings to arrange
crowd-sourced information. In the second band, they correctly access team and
individual pages and begin to discriminate among the crowd-sourced information
provided. The third band is associated with the second levels of the strand construct
maps: students search for targeted information, create links to displayed ideas, and
use context to discriminate crowd-sourced information. The fourth band also is
associated with the second level of the strand construct maps: students access digital
tools and resources available in the environment, and select/share tagged ideas. The
highest band is associated with the third level of the strand construct maps: students
create explanations in new media and use tools to share products with others in new
interfaces. As expected, this highest level is rarely seen in the data for the sample population assessed in the tasks to date.
Fig. 3.19 Variable map for composite construct using the Webspiration scenario
For the Arctic Trek scenario, 135 cases were collected across Australia, Finland
and the U.S.A. Approximately 84 % of the items were auto-scored and 16 % were
hand-scored (again, by trained scorers using a common scoring guide). There are 25
items in the three forms, and 20 are common across all forms. Approximately 8 %
of the items showed significant misfit – these items will be retained for further
examination in the field test. The reliability was estimated at 0.88 using the EAP
formulation (Wu et al. 2007). The Wright Map for the Arctic Trek data yielded simi-
lar results to the map in the Webspiration case.
In summary, these preliminary results show that it is indeed feasible to collect
data on a new variable such as Learning in Networks, and to do so using innovative
item types that encompass web resources. The reliability coefficients that were
observed are quite strong, even though the number of items in Arctic Trek was not
very large. The good match between the expected levels of response and the empiri-
cal results indicates quite sound levels of internal structure validity.
Conclusion and Next Steps
Measuring collaborative digital literacy as described here is helping us understand how students think and work differently than they did in previous decades. Accessing, using,
and creating information and knowledge digitally employs many important skills
needed today for career and college readiness. This chapter describes a domain
modelling process for assessing these skills through the BEAR assessment system,
along with examples of task development, and results from implementation of a
pilot study in four countries.
However, the domain modelling process is as yet incomplete for this set of con-
structs. The hypothesis indicated in Fig. 3.1 has not yet been properly tested (that
will need to wait until we have a larger data set from field trials) and, indeed, the
final form of the hypothesised structure is also incomplete. What is as yet missing
is a next level of elaboration of the learning progression, which is characterised by
hypothesised links between the levels of different constructs. The substantive and
empirical discovery process that establishes these hypotheses is not yet complete,
but the full diagram will be more like the one shown in Fig. 3.20. This learning
progression is from a separate project, the Assessing Data Modeling project (Lehrer
et al. 2014). In this project, there are seven constructs, shown here as the vertical
sets of blocks (each block representing a level of the construct). Between some
levels of some constructs are arrows, which indicate hypothesised hierarchical links
between those levels. The probability of students being observed in the target level
(i.e., the level the arrow points to) is expected to be very low, unless they have
already shown evidence of being at the source level (i.e., at the other end of the
arrow). This presentation allows the incorporation of interesting educational infor-
mation about how students are expected to progress through the skills and knowl-
edge defined in the learning progression. A hypothetical learning progression
for Learning in Networks is shown in Fig. 3.21. Statistical models to estimate these
links are currently being developed (Wilson 2012) and will be available for use
when field test data is collected.
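Stated formally (in our own notation, as a sketch of the hypothesis rather than of the estimation model itself), a link from level ℓ_s of a source construct to level ℓ_t of a target construct asserts

\[
P\!\left(X_t \ge \ell_t \mid X_s < \ell_s\right) \;\ll\; P\!\left(X_t \ge \ell_t \mid X_s \ge \ell_s\right),
\]

where X_s and X_t are the student’s levels on the source and target constructs.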
The participating ATC21S countries through the first phases of the project have
helped illustrate how their teachers and school systems support students to develop
twenty-first century competencies. Conclusions from the pilot studies show that
students in the 11–15 year age group demonstrate widely differing knowledge and
skills in these areas. Some are only beginning to take their first tentative steps toward
digital competence, while others exhibit quite breathtaking levels of mastery, such as the ability to collaborate seamlessly, creating insightful audio commentaries in mere moments and sharing them for common understanding. Differences in what
students can do, and the absence of formal teaching and opportunities to learn these skills, point to a rapidly widening gap between important LDN-ICT skills and what schools offer.
Fig. 3.20 An example learning progression diagram from the ADM project
ATC21S results are showing this to be particularly true when collaboration, creation and problem-solving are involved, based on early assessment efforts such as those described here.
The next steps for ATC21S involve wide-scale fieldwork trials for a segment of the
tasks, currently drawn from the collaborative problem-solving domain, now being con-
ducted in Australia, Finland, Singapore and the U.S.A. Associate countries Costa Rica
and the Netherlands are joining in to help test how language and culture affect twenty-
first-century teaching and assessments. The digital literacy domain tasks described
here are being used to explore the language and culture localisation process.
The final phase of the project will place the ATC21S resources in the public
domain. This will allow government policy-makers, teachers, school systems and
assessment institutions to download, modify and extend existing research and mate-
rials. This may help to bring more broadly the twenty-first-century skill domains
described here into classrooms around the world. Certainly an important contribu-
tion is to encourage more conversation on how information-age trends do not stop
at the school door.
Fig. 3.21 A hypothetical learning progression for learning in digital networks (panels show levels of the CiN, PiN, SCN and ICN strands, with lettered levels, placeholder descriptions and cross-strand links)
Acknowledgements We thank the ATC21S project and its funders for their support for the work
reported here. We also acknowledge the expertise and creative input of the ATC21S Expert Panel
in LDN-ICT: John Ainley (Chair), Julian Fraillon, Peter Pirolli, Jean-Paul Reeff, Kathleen Scalise
and Mark Wilson. Of course, the views and opinions expressed in this paper are those of the
authors alone.
References
Binkley, M., Erstad, O., Herman, J., Raizen, S., Ripley, M., Miller-Ricci, M., & Rumble, M.
(2012). Defining twenty-first century skills. In P. Griffin, B. McGaw, & E. Care (Eds.),
Assessment and teaching of 21st century skills. Dordrecht: Springer.
Cisco, Intel, Microsoft (CIM). (2008). Transforming Education: Assessing and Teaching 21st
Century Skills. Authors. Downloaded from: http://atc21s.org/wp-content/uploads/2011/04/
Cisco-Intel-Microsoft-Assessment-Call-to-Action.pdf
Griffin, P., McGaw, B., & Care, E. (Eds.). (2012). Assessment and teaching of 21st century skills.
Dordrecht: Springer.
Lehrer, R., Kim, M.-J., Ayers, E., & Wilson, M. (2014). Toward establishing a learning progression
to support the development of statistical reasoning. In J. Confrey & A. Maloney (Eds.),
Learning over time: Learning trajectories in mathematics education. Charlotte, NC:
Information Age Publishers.
Masters, G. (1982). A Rasch model for partial credit scoring. Psychometrika, 47(2), 149–174.
National Research Council (NRC). (2001). In Committee on the Foundations of Assessment,
J. Pellegrino, N. Chudowsky, & R. Glaser (Eds.), Knowing what students know: The science
and design of educational assessment. Washington, DC: National Academy Press.
Wilson, M. (2005). Constructing measures: An item response modeling approach. Mahwah:
Erlbaum.
Wilson, M. (2009a, December). Assessment for learning and for accountability. Paper presented at
the exploratory seminar: Measurement challenges within the race to the top agenda, at ETS,
Princeton, NJ.
Wilson, M. (2009b, December). Assessment for learning and for accountability. Policy brief from
the exploratory seminar: Measurement challenges within the race to the top agenda, at ETS,
Princeton, NJ.
Wilson, M. (2012). Responding to a challenge that learning progressions pose to measurement
practice: Hypothesized links between dimensions of the outcome progression. In A. C. Alonzo
& A. W. Gotwals (Eds.), Learning progressions in science. Rotterdam: Sense Publishers.
Wilson, M., & Sloane, K. (2000). From principles to practice: An embedded assessment system.
Applied Measurement in Education, 13(2), 181–208. Downloaded from: http://www.informaworld.
com/smpp/content~content=a783685281~db=all
Wilson, M., Bejar, I., Scalise, K., Templin, J., Wiliam, D., & Torres-Irribarra, D. (2012).
Perspectives on methodological issues. In P. Griffin, B. McGaw, & E. Care (Eds.), Assessment
and teaching of 21st century skills. Dordrecht: Springer.
Wu, M., Adams, R., Wilson, M., & Haldane, S. (2007). ConQuest: Generalised item response
modelling software (Version 2.0). Camberwell: ACER Press.
Part III
Delivery of Collaborative Tasks, Their
Scoring, Calibration and Interpretation
In Chap. 4, Care, Griffin, Scoular, Awwal and Zoanetti (2015) describe the prototype
collaborative problem solving tasks. The tasks were constructed as games through
which students collaborate to solve problems or learn. The tasks can be divided into
two types. There are those that are curriculum independent and those that are cur-
riculum dependent. In addition there are those that are symmetric and those that are
asymmetric. There are tasks that are presented as a single internet web page and
tasks that are multipage. Multipage tasks were designed to become increasingly dif-
ficult and complex with increasing numbers of pages. In Chap. 5, Awwal, Griffin
and Scalise (2015) describe the delivery platform which houses the tasks and con-
trols access, security and data collection. The links between the task bank and the collaborator allocation enable the contributions of individual students to be determined. In Chap. 6, Adams, Vista, Scoular, Awwal, Griffin and Care (2015) describe
how the data is collected. They also describe how the platform is used to apply the
coding and scoring algorithms to produce the student performance reports for teach-
ers. Griffin, Care and Harding (2015, Chap. 7) demonstrate how the data is inter-
preted and how scores are calibrated using item response modelling, and they
present the dimensions of the domains. The chapter presents evidence of the con-
struct validity, stability of indicators across systems of education, across curricula,
across languages, and provides evidence that the construct being measured across
those contexts is a constant. Details of tasks are provided in terms of the structure,
symmetry and complexity and an increasing shift towards human to human interac-
tion on the Internet.
Care, E., Griffin, P., Scoular, C., Awwal, N., & Zoanetti, N. (2015).
Collaborative problem solving tasks. In P. Griffin & E. Care (Eds.), Assessment
and teaching of 21st century skills: Methods and approach (pp. 85–104).
Dordrecht: Springer.
Awwal, N., Griffin, P., & Scalise, K. (2015). Platforms for delivery of collabora-
tive tasks. In P. Griffin & E. Care (Eds.), Assessment and teaching of 21st century
skills: Methods and approach (pp. 105–113). Dordrecht: Springer.
Adams, R., Vista, A., Scoular, C., Awwal, N., Griffin, P., & Care, E. (2015).
Automatic coding procedures for collaborative problem solving. In P. Griffin &
E. Care (Eds.), Assessment and teaching of 21st century skills: Methods and
approach (pp. 115–132). Dordrecht: Springer.
Griffin, P., Care, E., & Harding, S. (2015). Task characteristics and calibration.
In P. Griffin & E. Care (Eds.), Assessment and teaching of 21st century skills:
Methods and approach (pp. 133–178). Dordrecht: Springer.
Chapter 4
Collaborative Problem Solving Tasks
Esther Care, Patrick Griffin, Claire Scoular,
Nafisa Awwal, and Nathan Zoanetti
Abstract This chapter outlines two distinct types of collaborative problem solving
tasks – content-free and content-dependent – each allowing students to apply differ-
ent strategies to solve problems collaboratively. Content-free tasks were developed
to emphasise the enhancement of inductive and deductive thinking skills. Content-
dependent tasks allow students to draw on knowledge gained through traditional
learning areas or subjects within the curriculum. The collaborative problem solving
framework emphasises communication for the purpose of information gathering,
identification of available and required information, identification and analysis of
patterns in the data, formulation of contingencies or rules, generalisation of rules,
and test hypotheses. Characteristics of tasks which were identified as appropriate
for eliciting collaborative problem solving processes are reported and illustrated by
exemplar items.
Introduction
This chapter demonstrates how the collaborative problem solving (CPS) frame-
work, outlined in Hesse et al. (2015; Chap. 2), is applied to a selection of tasks and,
in turn, how each of the tasks highlights the skills outlined in the framework. There
are two distinct types of tasks presented here: content-free and content-dependent.
Content-free tasks do not demand any prerequisite knowledge such as might be
taught in traditional school-based subjects but rely on the application of reasoning.
Content-dependent tasks draw on skills and knowledge derived from
The views expressed in this chapter by N. Zoanetti are those of the author and do not necessarily reflect the views of the Victorian Curriculum and Assessment Authority.
E. Care (*) • P. Griffin • C. Scoular • N. Awwal
Assessment Research Centre, Melbourne Graduate School of Education, University of
Melbourne, Parkville, VIC, Australia
e-mail: [email protected]
N. Zoanetti
Victorian Curriculum and Assessment Authority, East Melbourne, VIC, Australia
curriculum-based work. As discussed in Hesse et al. (2015), under the proposed
CPS framework there are three strands of indicators that summarise social skills and
reflect the collaborative aspect of problem solving: participation, perspective taking,
and social regulation. Participation is the foundation for engaging with the task and
other collaborators, and is reflected in the way people act or interact to complete
tasks. Perspective taking skills emphasise the quality of interaction between stu-
dents, reflecting the level of students’ awareness of their collaborators’ knowledge
and resources as well as their responding skills. Social regulation refers to the strate-
gies used by students when collaborating, such as negotiating, taking initiative, self-
evaluating and taking responsibility. Cognitive skills are of equal importance within
this framework and are similar to those employed in independent problem solving
tasks. Indicators of such skills can be summarised under two headings: task regula-
tion and knowledge building. Task regulation refers to the ability of students to set
goals, manage resources, analyse and organise the problem space, explore a prob-
lem systematically, aggregate information and tolerate ambiguity. Knowledge
building is concerned with a student’s ability to understand the problem and to test
hypotheses. Knowledge building is underpinned by skills such as planning and exe-
cuting, and reflecting and monitoring.
In teaching students how to become better problem solvers, a common constraint
in traditional test design has been that the attainment of the solution is the sole
criterion from which inferences can be made. This has occurred despite the fact that
procedural aspects of problem solving have been considered important for some
time (Polya 1945, 1957; Garofalo and Lester 1985; Schoenfeld 1985). Within the
ATC21S project¹ there is an increased focus on drawing inferences about how (and
how well) students solve problems, as opposed to simply asking whether they are
solving them. Problem solving has sequential phases or steps, such as understand-
ing, planning, solving and checking, that are universally applicable across tasks and
contexts. This information, together with information on student collaborative
effort, might better support the decisions an educator must make when determining
the instructional needs of individual students (Zoanetti 2010). Although goal-
attainment is obviously important, it should not be the only criterion of interest.
Educators stand to benefit from inferences about procedural quality when determin-
ing how best to improve student problem solving.
Problem and Task Characteristics
The differences between real-world problems and problems as they are often
analysed in psychological research raise the question of whether the assessment
of collaborative problem solving through well-defined problems is useful. A
“well-defined” problem is one in which the guiding question and consequently the
¹ The acronym ATC21S™ has been globally trademarked. For purposes of simplicity the acronym is presented throughout the chapter as ATC21S.
goal is known, where the elements or “artefacts” that are salient to the solution are
known and present, and where the required processes to reach solution are under-
stood. Such problems are amenable to measurement since they involve specific
known steps, and have final correct solutions. Use of these types of problems also
lend themselves to teaching since a sequence of steps is often clear. Well-defined
tasks are typically found within the science and mathematics curriculum. On the
other hand, “ill-defined” problems are characterised by ambiguity. They may relate
to everyday problems and are not domain-specific; they may draw on many differ-
ent types of knowledge. They will have many of the characteristics that are associ-
ated with what is known as “wicked” problems. These are problems in the real sense
of the word – situations for which a solution is unknown, of which the elements or
components are not identified, and concerning which useful processes have not
been verified. Consequently, for ill-defined tasks there may be several solutions that
are appropriate to different degrees, several solution paths or strategies, and it may
be the case that not all information is presented or available. There may be no clear
direction in which to proceed and no clear identification of how the correctness of a
solution can be determined.
The difference between well-defined and ill-defined problems calls into question how valid inferences about individuals’ problem solving capacities can be if they are drawn only from well-defined problems. The long term objective of teaching problem solving skills would be to equip students with the capacity to draw from a range of strategies when confronted with ill-defined problems – the kind of problems the real world actually poses.
Hesse et al. (2015) describe the nature of problems that might require collabora-
tive activity. The salient feature is that resources will not be equally accessible to all
the problem solvers, so there is a need for multiple solvers. Accessibility refers both
to direct retrieval as well as to human capacity to understand and manipulate the
required artefacts – whether these be objects, knowledge, or processes.
Together, the concerns about whether only well-defined problems can usefully
indicate students’ problem solving capabilities, and the nature of problems that
require collaborative activity, combined within the ATC21S approach to the deliber-
ate design of tasks along a well-defined to ill-defined spectrum. The assessment
tasks were constructed to reflect the characteristics of problems which require col-
laboration. These characteristics are ambiguity, asymmetry, and unique access to
resources with consequent dependence between learners. With such tasks it is pos-
sible to test the construct definition model, the developmental learning progres-
sions, the indicators of increasing competence, and the task development and
delivery. At the simplest level, problem solving tasks were designed to make
collaboration both desirable and essential. In the classroom, this can be achieved by
the teacher giving different sets of information to different students in a group,
rather than giving them all the same information. In order to solve the problem, the
students then need to collaborate in order to access the required resource, in this
case, information. Such an approach mirrors real life collaborative problem solving
situations, where information may be derived from different sources and is not
shared a priori. The dependence between learners that emanates from unique access
to different resources provides a more authentic prompt for collaborative activity
than mere instructions from a teacher for students to “work together”. Working
together may be valued for its social aspect, yet might not be essential, and can be
regarded by students as counter to their best interests – particularly when they are
functioning in competitive classroom environments.
The tasks in the ATC21S project have many similar characteristics. Each task
was constructed so that students would be able to click, drag and drop objects
using the mouse cursor, with no requirement to use the keyboard. The tasks were
designed for two students to work on and there is a ‘chat box’ for communication
between collaborators, designed to facilitate student communication online
throughout task completion. Each task presents an instruction stem followed by a
problem with tasks ranging from 1 to 8 pages in length. The tasks were designed
to be recognisable at face value as puzzles and to include graphics to attract and
maintain student engagement. A few of the tasks present exactly the same images,
perspectives, instructions and resources to the two students – these are referred to
as symmetrical tasks. Many of the tasks present asymmetrical perspectives, pro-
viding different information and resources to each student, thereby increasing their
need for collaboration. There is encouragement in the tasks for students to discuss
the problem in order to manage the identification of resources, and sharing of
these. The tasks vary in difficulty level; some require less collaboration but are
cognitively more difficult, while others are cognitively easier but require efficient
collaboration to solve. The difficulty of the tasks was varied, taking into consideration the arguments of Funke (1991), by adjusting several parameters: the number of problem states, the constraints on object manipulation built into each task and described in the problem stem, the complexity of reasoning or planning required to guide the search, and finally the configuration of objects and symmetries within the task.
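As a purely hypothetical illustration of how these design parameters might be recorded for a task (the field names and example values below are our invention, not the ATC21S task specification):

    from dataclasses import dataclass

    @dataclass
    class TaskDesign:
        # Hypothetical record of the Funke-style difficulty parameters;
        # names and values are illustrative, not the ATC21S schema.
        name: str
        pages: int                       # tasks range from 1 to 8 pages
        symmetric: bool                  # same view/resources for both students?
        n_problem_states: int            # size of the problem state space
        n_manipulation_constraints: int  # constraints described in the stem
        reasoning_depth: int             # complexity of reasoning/planning

    # Placeholder values for a symmetric, single-page task.
    example = TaskDesign(name="Laughing Clowns", pages=1, symmetric=True,
                         n_problem_states=9, n_manipulation_constraints=1,
                         reasoning_depth=2)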
The matter of symmetry poses challenges to assumptions made in education
about equal access for learners. Although there may well be major differences in
education provision across and within countries, the presumption is that in any
classroom all students will have the same access to resources. In this context,
resources refer to tools, texts, teachers, and the classroom environment with all of
these supporting and enhancing the learning of the student. This provision is
extended to equality of access in the assessment situation, with all students again
typically being provided with the same resources. This equality of access has been
contested in the last decade by virtue of emphasis in some learning environments,
on group work. In this scenario, equality of resource is not assured, since different
groups will present with different human resources, and the capacity of the indi-
vidual to act will be determined not only by their access to resources, and their own
capacities, but also by the capacities of others. This reality is reflected in the ATC21S
assessment environment, where students are not provided with the same access to
resources – either those constructed within the assessment environment, or those
that ensue from the varying capacities that student partners bring into play. Both
differential access to resources and the consequent dependence between students
bring about asymmetry in the assessment task activity.
Asymmetry raises interesting challenges in the world of assessment, as well as in
how students and their teachers cope with the learning and teaching activity. In this
chapter we demonstrate how both symmetry and asymmetry are manifested in the
assessment environment. Discussion of the consequences of this for scoring is pre-
sented in Adams et al. (2015; Chap. 6).
Content-Free Collaborative Problem Solving Tasks
Two tasks outlined in this section focus on students’ hypothetico-deductive reason-
ing skills in an online collaborative problem solving context. The translation of
these steps into a process that can be generalised and called “collaborative problem
solving” should enable teachers to assess and develop their students’ capacity for
hypothetico-deductive thinking as it manifests itself in collaborative problem solv-
ing behaviour. Hypothetico-deductive thinking begins with a causal question.
Students then generate hypotheses based on observations and data collection. In a
virtual world it is possible to monitor this behaviour through analysis of chat and
action events. These events can be seen to follow a pattern suggested by Griffin
(2014), who argued that problem solving can be understood as a hierarchical series
of steps moving from inductive to deductive thinking. Problem solvers first examine
the problem space to identify its elements. Next they recognise patterns and rela-
tionships between the elements, and formulate these into rules. The rules are then
generalised. When generalisations are tested for alternative outcomes, the problem
solver is said to be testing hypotheses. While inductive reasoning focuses on estab-
lishing a possible explanation to test in the first place, deductive reasoning involves
testing whether the explanation is valid or not. The deductive method attempts to
“deduce” facts by eliminating all possible outcomes that do not fit the available
information. Collaborative problem solving requires the formation of partnerships
in which agreement is reached on the nature of hypotheses to be tested and the man-
ner in which they will be tested.
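As a rough sketch of what such monitoring might look like in practice (the cue vocabularies and event format below are our own assumptions, not the ATC21S coding rules), chat events could be screened for markers of the inductive and deductive phases:

    # Crude keyword screen for inductive vs. deductive phases in chat;
    # the cue vocabularies here are assumptions for illustration only.
    INDUCTIVE_CUES = ("i notice", "pattern", "it seems", "maybe", "what if")
    DEDUCTIVE_CUES = ("so it must", "therefore", "let's test", "rule is")

    def tag_phase(message):
        """Tag a chat message as 'inductive', 'deductive' or 'other'."""
        text = message.lower()
        if any(cue in text for cue in DEDUCTIVE_CUES):
            return "deductive"
        if any(cue in text for cue in INDUCTIVE_CUES):
            return "inductive"
        return "other"

    chat = ["I notice the ball always comes out at position 2 from the left",
            "So it must be a fixed rule - let's test the middle entry"]
    print([tag_phase(m) for m in chat])  # ['inductive', 'deductive']

A production coding scheme would of course combine such chat markers with the accompanying action events rather than relying on keywords alone.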
The two “content-free” tasks described here are compatible with an individual
problem solving approach in that each has a finite solution, and all the informa-
tion required for problem solution is included in the problem space. The transition
to identification of these tasks as collaborative problem solving tasks lies in the
re-structuring of the problem space such that neither member of a pair of collabo-
rating students has access to all necessary information. The first task, Laughing
Clowns, is structured symmetrically – both students have access to all resources;
while the second task, Olive Oil, is structured asymmetrically – each student has
access to different resources. The term “problem space” here refers to the virtual
environment which provides all the stimuli and resources that identify that there
is a problem. The stimuli include text instructions and some explanation about the
problem, as well as virtual artefacts, both static and dynamic, including the
graphic objects on the screens, and the indicators of movement such as mouse
cursor.
The tasks are hosted on a virtual platform that allows for real-time work activity
by two students operating in a one-to-one computing environment. Students may
work on the tasks on any computers that have internet access and up to date brows-
ers. Technical requirements are outlined by Awwal et al. (2015; Chap. 5). Each task
is described here in terms of the problem solving goals, and the activities or pro-
cesses and artefacts available to the students. The description is followed by an
analysis of the subskills from the conceptual framework that are drawn upon, and
assessed through the task.
Laughing Clowns Task
This task requires students to find patterns, share resources, form rules and reach
conclusions. The two students are presented with a clown machine and 12 balls to
be shared between them. The goal for the students is to determine whether their
clown machines work in the same way. In order to do this, the two students need to
share information and discuss the rules as well as negotiate how many balls they
should each use. The students must place the balls into the clown’s mouth while it
is moving in order to determine the rule governing the direction the balls will go
(Entry = Left, Middle, Right, and Exit = position 1, 2, 3). Each student must then
indicate whether or not they believe the two machines work in the same way (see
Fig. 4.1). Students do not have access to each other’s screen so are not able to deter-
mine the rule governing the other’s clown machine.
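To make the structure of the problem concrete, each clown machine can be thought of as a fixed mapping from entry positions to exit positions. The sketch below is only illustrative – the mappings and trial counts are invented, not the actual task rules:

    # Each machine maps an entry (Left/Middle/Right) to an exit (1/2/3);
    # these particular mappings are invented for illustration.
    machine_a = {"left": 2, "middle": 3, "right": 1}
    machine_b = {"left": 2, "middle": 3, "right": 1}

    def same_rule(trials_a, trials_b):
        """Compare the entry->exit observations the two students share via chat.

        Only entries tried by BOTH students can be compared, which is why
        negotiating the split of the 12 shared balls matters.
        """
        common = trials_a.keys() & trials_b.keys()
        return all(trials_a[e] == trials_b[e] for e in common), sorted(common)

    # Each student spends three of the shared balls, one per entry position.
    trials_a = {entry: machine_a[entry] for entry in ("left", "middle", "right")}
    trials_b = {entry: machine_b[entry] for entry in ("left", "middle", "right")}
    print(same_rule(trials_a, trials_b))  # (True, ['left', 'middle', 'right'])

Under this reading, an efficient pair divides the balls so that, between them, every entry position is observed on both machines before a conclusion is drawn.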
Social Skill: Interaction
A fundamental requirement for successful completion of this task is interaction
between partners. Students need to be aware from the start that their 12 allocated
balls are shared and that the most effective way of finding the solution is to allocate
six balls to each such that both students have adequate and equal opportunity to trial
their machine and reach a conclusion. Students who do not interact may begin using
the balls, and even use them all before realising the resources are shared. More
Fig. 4.1 Laughing Clowns task