Part IV
Cultivating Student Competences for the
Digital Smart Society
Chapter 12
Measurement of Computational Thinking
in K-12 Education: The Need for Innovative
Practices
Takam Djambong, Viktor Freiman, Simon Gauvin, Martine Paquet,
and Mario Chiasson
Abstract We are currently living in a period where computational thinking (CT)
will influence everyone in every field of endeavor (Wing, 2006, 2008). While its
definition as well as its place in school curricula are still not clear, the process of
integrating CT within the K-12 school system is underway. In New Brunswick,
Canada, as in other parts of the world, more and more students are being exposed to
different programming and coding activities that seek to introduce CT skills such
as abstraction, decomposition, algorithmic thinking, and pattern recognition
while solving problems in a variety of technology-rich environments. Our 3-year
study of innovative practices targeting the development of CT consists of three main
stages: (1) the research and development of a visual data flow programming language for the development of CT skills in K-12, (2) the development of a testing method based on a selection of tasks and its application to measuring CT in middle and high school students, and (3) a deeper investigation into the process of CT development in students, along with the elaboration of a novel testing suite to better detect students' progress in each of the four components of CT skills. Our findings demonstrate, along with students' engagement and interest in solving challenging tasks, the complexity of the issues that emerge from this process as well as possible paths for future investigations.
T. Djambong (*) · V. Freiman · M. Chiasson
Faculté des Sciences de l’Éducation, Université de Moncton, Campus de Moncton,
Moncton, NB, Canada
e-mail: [email protected]; [email protected]; [email protected]
S. Gauvin
Agora Mobile Inc., Moncton, NB, Canada
M. Paquet
Anglophone East School District, MENB, Moncton, NB, Canada
e-mail: [email protected]
12.1 Introduction: Increased Attention to Computational Thinking and the Problem of Its Development and Assessment in K-12
Since the publication of two articles by Jeannette Wing [1, 2], the introduction of computational thinking (CT) in K-12 education has been gaining momentum, initiating "coding" programs within the past decade. However, this rapid evolution has been hampered by several challenges facing school teachers and curriculum designers.
While embracing CT as a twenty-first-century skill for all, Canada, with its provincially governed educational system, does not yet have a clearly defined role for CT in school curricula. Historically, the term "computational thinking" was employed by Seymour Papert as early as the 1980s and 1990s, first in connection with his philosophical views linking computer programming to more general issues of culture and education [3]; he later brought it to the discussion of what CT could contribute to the understanding of geometry [4]. Wing [1] then defined the term as "a set of attitudes and skills that are universally applicable, and which everyone should learn, not just only IT professionals." Yet it is not completely clear exactly which components of this set of "universally applicable" attitudes and skills make CT an essential competence for all [5], nor how to develop and measure them.
Several worldwide initiatives aimed at introducing computational thinking into
curricula from Kindergarten to Grade 12 have been reported in the literature [6–8].
These initiatives are currently implemented in the United States,1 Australia,2 the
United Kingdom,3 and many other European countries.4 The following main insights
have been gathered from the studies:
1. Computational thinking is a mental activity that involves a set of mental processes and cannot be reduced to the ability to write lines of code or to program a computer to perform a specific task. Among the basic concepts most commonly cited in the literature and used to define and measure computational thinking are abstraction, decomposition, pattern recognition, algorithmic design, and generalization [9–14];
2. Several studies also highlight the need to identify pedagogical approaches, learning environments, measurement tools, and modalities for assessing the skills that are beginning to emerge as closely associated with computational thinking. Without appropriate assessment tools, the integration of CT in school curricula could be somewhat problematic [15]. On the other hand, given that CT
1 https://k12cs.org/wp-content/uploads/2016/09/K%E2%80%9312-Computer-Science-Framework.pdf.
2 http://docs.acara.edu.au/resources/Technologies_Sequence_of_achievement.pdf.
3 https://www.microsoft.com/en-us/research/wp-content/uploads/2016/07/ComputingAtSchoolCACM.pdf.
4 Netherlands, Lithuania, Norway, Hungary, Greece and Poland.
is now considered a fundamental skill that is transferable from one coding, programming, or robotics platform to another, there is also a need for assessment tools that apply across platforms [16].
In Canada, there is a scattered set of initiatives at different school levels in various provinces. Some schools in the province of Ontario are trying to integrate computational thinking into the primary mathematics curriculum [17]. Some students from Grades 7 to 10 can take part in the International Challenge5 on Informatics and Computational Thinking, organized by the University of Waterloo.6 In Quebec, a Minecraft environment is used in some primary schools, as recently reported by Karsenti et al. [18]. Our investigation of rather isolated innovative practices of CT development in New Brunswick schools aims to add more research-based evidence to this portrait by reporting findings from three pilot studies (called Stage 1, Stage 2, and Stage 3 in this paper).
While responding to the need for more precise tools for the assessment of CT, our chapter attempts to show the complexity of introducing students to ideas of coding and programming, along with tasks that would provide deeper insight into how the development of CT can be supported and assessed.
More precisely, we present the preliminary results of an ongoing multiyear investigation of CT that seeks to provide some structure to the problem of better understanding its nature, how it can be measured, and how these results can be used to improve the selection of curriculum and CT learning outcomes in education in general. The research team was created as a result of a collaboration between the ICT Competences Network in Atlantic Canada (CompéTICA, Compétences en TIC en Atlantique7), involved in the identification and measurement of digital literacy on a lifelong continuum [19], and Agora Mobile Inc. The team combines expertise in educational research, computer science research, school administration, educational technology, and teaching. This paper originated with a first study seeking to understand the CT effects of a technology-rich learning environment (TRE), Vizwik, developed by one of the authors and designed to teach coding to K-12 students through building mobile apps [20]. Unlike similar learning environments, such as MIT's Scratch and App Inventor [21], what is unique about Vizwik is its use of a particular type of programming language, visual data flow (VDF) [22]. Data flow has been shown to reduce the complexity of programming [23] and has potential for aiding the development of CT in students: it offers simplified control flow semantics, removes variables and naming, eliminates syntax errors, is naturally parallel, and has a declarative functional style that puts more focus on the problem domain than on the control flow syntax of imperative styles, helping students focus on problem solving. In studying the effects of Vizwik on students in this first study, we realized early on that the methods and tools at our disposal were insufficient to accurately measure CT development, both within the Vizwik-based environment specifically and from the more general perspective of introducing innovative CT-related teaching practices in schools. This discovery led us to search for tools and, not finding any with sufficient precision, to embark on research to create them and validate them in real school settings.
5 http://www.bebras.org/?q=countries.
6 http://www.cemc.uwaterloo.ca/contests/bcc.html.
7 www.competi.ca.
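Since visual data flow may be unfamiliar, the following minimal sketch illustrates the execution model described above: operations are wired into a graph, values flow along the edges, and there are no named variables or explicit control flow. The node names, wiring, and graphlib-based scheduler are our own illustrative assumptions, not Vizwik's actual engine.

```python
# A toy visual-data-flow (VDF) interpreter: each operation fires once all of
# its inputs have arrived, so "control flow" is just data availability.
# Illustrative sketch only, not Vizwik's implementation.
from graphlib import TopologicalSorter

# Hypothetical script computing fahrenheit = celsius * 9 / 5 + 32 as a graph.
# Node -> (function, names of upstream nodes wired into its inputs).
script = {
    "celsius": (lambda: 100.0, []),
    "times9": (lambda x: x * 9, ["celsius"]),
    "div5": (lambda x: x / 5, ["times9"]),
    "plus32": (lambda x: x + 32, ["div5"]),
}

def run(script):
    """Evaluate nodes in dependency order; results flow along the edges."""
    deps = {name: set(srcs) for name, (_, srcs) in script.items()}
    results = {}
    for name in TopologicalSorter(deps).static_order():
        fn, srcs = script[name]
        results[name] = fn(*(results[s] for s in srcs))
    return results

print(run(script)["plus32"])  # 212.0 -- no variables, loops, or syntax to debug
```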
In the next sections, we discuss the theoretical background, the design of the different stages of the study, and the methodology. We then present the main findings of the research and end with a discussion and conclusions on the main insights of the study.
12.2 Theoretical Background
12.2.1 Defining Skills for CT
The educational intervention conducted in the context of our study aimed to develop in the target students some skills or abilities related to CT. As noted above, the difficulties of defining CT are well documented in research. The same seems to be true of the definition of the abilities and skills we try to assess. While noticing a general lack of consensus, we identify four approaches described in the literature. First, Selby and Woolard [24] mention the frequent use of abstraction and decomposition as part of the process of CT. Second, from the computer science (programming) perspective, some fundamental structures are used, such as conditional logic, parallel thinking, recursive and iterative thinking, flux control, systematic treatment of information, debugging, and systematic detection of errors [25]. Third, practical aspects of the development of CT within the STEM movement are mentioned, such as working with data, modeling and simulation, computational problem solving, and systemic thinking [26, 27]. Fourth, transversal cognitive abilities are often mentioned as being associated with CT.
Moreover, a study8 conducted by ISTE9 and CSTA10 [28] has identified 11 con-
cepts which include: abstraction (identifying and extracting relevant information to
define main idea(s)), algorithm design (creating an ordered series of instructions for
solving similar problems or for doing a task), automation (having computers or
machines do repetitive tasks), data representation (depicting and organizing data in
appropriate graphs, charts, words, or images), data collection (gathering informa-
tion), data analysis (making sense of data by finding patterns or developing insights),
decomposition (breaking down data, processes, or problems into smaller, manage-
able parts), parallelization (simultaneous processing of smaller tasks from a larger
task to more efficiently reach a common goal), pattern generalization (creating
models, rules, principles, or theories of observed patterns to test predicted outcomes),
8 http://www.iste.org/docs/Standards-Resources/iste-standards_students-2016_one-sheet_final.pdf.
9 International Society for Technology in Education.
10 Computer Science Teacher Association.
pattern recognition (observing patterns, trends, and regularities in data), and finally
simulation (developing a model to imitate real-world processes).
When selecting tasks for our study, we adopted a view of CT as a particular way of approaching problems and analyzing different situations, grounded in the four basic abilities commonly mentioned in the literature [11, 12, 29] as being directly associated with CT concepts. These abilities are: (a) building mental models of a problem and its solution (abstraction, AB); (b) developing a solution as a collection of tasks to be performed as a series of steps (algorithmic thinking, AL); (c) assessing, given the complexity of a problem, whether to break it down into several subproblems (decomposition, DE); and (d) being able to link a specific problem to other problems of the same type that have already been solved (pattern recognition, PR). These specific abilities were chosen because they can be viewed as cognitive processes or skills underlying the development of CT through the manipulation of visual, tangible, and data flow programming learning environments.
12.2.2 Assessment of Curriculum for CT Development
Educational practices that seek to develop CT in the school context are often described through the lens of the constructionist theory of learning, grounded in the work of Papert [30]. According to this view, the construction of knowledge happens through the manipulation of technology-rich environments; it is anchored in the construction of symbolic or tangible artifacts and provides a sound articulation between concrete and abstract thinking. Learning is then complexified by the interaction of personal, social, cultural, and tangible dimensions [31]. The personal dimension is characterized by a strong engagement of students, which allows them to explore the affordances of the TRE, such as Scratch, Lego Mindstorms, or Vizwik, to create games or other types of applications while making their learning visible in attaining goals they set for themselves. The social dimension implies that knowledge constructed by students becomes social capital which can be shared with other students, in both conceptual and tangible forms, helping them build a real community of practice [31]. The cultural dimension determines which way of thinking (abstract or formal versus procedural or concrete) is the most valued in a particular moment or context. Finally, the tangible dimension allows one to appreciate the result of students' work, which is not only a product of their imagination but also something real that takes the form of the codes, programs, and diverse applications that students construct in the TRE; all this provides them with a specific and meaningful learning context [32]. An essential result of this approach, from the constructionist perspective, is that it enables students to develop abilities to think, to create, to discover, and to innovate while interacting with computer artifacts, by means of appropriate pedagogical guidance. Our previous study of robotics-based learning [33] shows that this kind of innovative pedagogy creates conditions where the tools gradually become cognitive tools, thus enriching students' learning experience [34].
12.2.3 Measurement of CT
In this complex context of learning, the question of the assessment of CT needs to be handled with much care. Several studies conducted in the last decade based on Wing's definition present a variety of models considering different concepts. Within this second axis of our theoretical perspective, we aim to coordinate the process of students' learning with the assessment of students' performance, focusing on investigating the capacity of available tools, still quite limited at this point, to discriminate the constructs of CT we found in the literature. These include, according to the classification of Roman-Gonzalez, Moreno-Leon, and Robles [15]:
• Tools for summative assessment of CT: (a) aptitude-based, such as the Computational Thinking Test [35], the Test for Measuring Basic Programming Abilities [36], or the Commutative Assessment Test [37]; (b) cognitive-abilities-based [38, 39].
• Tools targeting the ability to transfer CT skills into real-life contexts. Into this category, from our point of view, fall the tasks of the International Bebras competitions [40, 41] as well as the Computational Thinking Pattern Quiz [42].
• Tools for formative assessment of CT, which aim to provide students with immediate automatic feedback helping them to improve their abilities. We put Dr. Scratch [43] and the Computational Thinking Pattern Graph [44] in this category.
• Tools for the measurement of perceptions of and attitudes towards CT. We mention here the Computational Thinking Scales (CTS) developed by Korkmaz, Çakir, and Özden [45].
• Tools related to the assessment of the vocabulary and communication abilities associated with CT, which could help measure students' verbal skills when doing coding tasks [46].
Despite the existence of different tests aiming to measure the development of CT in students, two important issues need to be addressed: first, many tests are still in their initial phases of psychometric validation, and second, many researchers [25, 47, 48] seem to agree that no single instrument is able to capture all the complexity of the different dimensions of CT. Thus, they suggest using a combination of several measurement tools for a more objective assessment.
12.3 Methodology and Design
12.3.1 Methodology: Design-Based Research Approach
It is becoming a common trend to operationalize an evidence-based approach and its underlying design-based research (DBR) methodology, which blends empirical educational research with theory-driven design [49]. Referring to the fundamental work of the Design-Based Research Collective [50], the authors identify five characteristics of DBR that frame the implementation and evaluation of innovative educational concepts based on the analysis of what occurs, rather than working from hypotheses, thus helping to deal with "consistently unreliable predictions about the impact of technology in education" ([49], p. 364).
Within the DBR paradigm, in a previous study of a virtual community, we identified three basic components of its dynamic developmental structure: needs analysis, pedagogical and technological background, and research feedback ([51], p. 24). The authors of the DBR model [50] argue that this innovative research approach is suitable for studying complex problems in real, authentic contexts in collaboration with practitioners. Integrating known and hypothetical design principles and conducting rigorous and reflective inquiry to test and refine innovative learning environments, DBR-based research must account for how designs function in authentic settings, using methods that can document and connect processes of enactment to outcomes of interest [50]. This model is suitable for addressing the challenges facing innovative approaches targeting the development of CT in the K-12 school system. So far, we have conducted three cycles of research, each corresponding to one of the three stages described above.
According to the objectives of each cycle, we formulated the following research questions:
Stage 1:
• RQ1: What lessons do we learn from first field observations of the Vizwik testing
in the classroom?
• RQ2: Does visual data flow programming stimulate the development of CT
skills in students?
Stage 2:
• RQ3: How do the scores reflect the estimated level of difficulty of each task and the presence of CT elements (or their combination) in those tasks during the pretest and posttest?
• RQ4: What are students’ and teacher’s perceptions of the experience?
Stage 3:
• RQ5: Can we measure the individual CT skills developed by students?
12.3.2 Design of the Study and Stages
The study has consisted so far of three main stages. The first stage was dedicated to researching the development of CT through the use of a novel data flow computer programming language for education, Vizwik. By introducing students to this type of programming, we hoped to observe and understand how it would promote the development of CT in the context of a STEM classroom. Making observations, conducting interviews, and analyzing student work, we concluded that our methods and tools were insufficient to determine a direct correlation between student exposure to data flow programming and their CT skills development. Before we could determine whether data flow was resulting in CT, we needed to be able to measure CT development; hence the focus of the following stage, realized in collaboration with the CompéTICA Network.
In Stage 2, we first began an extensive search for CT measurement tools. While several seemed promising, and while more were still in development, we determined that they were either not yet mature or not applicable to our context of data flow programming (Stage 1). Consequently, we shifted our research focus to the creation of a new testing method. Following a literature review that allowed us to enrich our theoretical background, we started building a questionnaire through the selection of specific Bebras tasks [29] along with some original ones designed by our team. Each task targeted one or more properties of CT: abstraction (AB), decomposition (DE), algorithmic thinking (AL), and pattern recognition (PR). The tasks were piloted (as pre- and posttest) in one school which integrated Vizwik programming tasks along with other technology-rich environments (Scratch, Makey Makey, and LEGO Robotics Kits) within a broad-based curriculum (Grades 6 and 9). While indicating some changes in students' scores, the results showed that the tool had rather limited ability to discriminate each individual cognitive skill related to CT development. Moreover, we were unable to relate the changes to the specific programming and coding tasks that students tried to realize.
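As an illustration of how such a task bank can be scored, the sketch below tags each task with the CT properties it targets and aggregates scores per property combination, in the spirit of the analysis reported later in Table 12.5. The task tags and responses are invented; this is not the team's actual scoring code.

```python
# A minimal sketch of aggregating task scores by CT-property combination.
# Task tags and responses are invented for illustration.
from collections import defaultdict

# Each task is tagged with the CT properties it targets.
task_tags = {"T1": {"AB"}, "T2": {"AL"}, "T3": {"AB", "DE"}, "T4": {"AL", "PR"}}

# responses[student][task] = 1 if solved correctly, else 0.
responses = {
    "s1": {"T1": 1, "T2": 0, "T3": 1, "T4": 0},
    "s2": {"T1": 0, "T2": 1, "T3": 1, "T4": 1},
}

def scores_by_combination(task_tags, responses):
    scores = defaultdict(list)
    for answers in responses.values():
        for task, score in answers.items():
            combo = " + ".join(sorted(task_tags[task]))  # e.g. "AB + DE"
            scores[combo].append(score)
    return {combo: 100 * sum(s) / len(s) for combo, s in scores.items()}

print(scores_by_combination(task_tags, responses))
# {'AB': 50.0, 'AL': 50.0, 'AB + DE': 100.0, 'AL + PR': 50.0}
```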
This is why, during Stage 3, we shifted our focus to a deeper investigation of the process of the development of CT. We began by developing new questions for our pretests and posttests that would address the issue of discriminating the components of CT and better connect them to the learning that happened in the classroom.
Our approach was to develop questions that were visual representations of each of the CT properties we sought to measure. For example, if abstraction is the cognitive ability to focus on the large picture without all the detail in place, then showing abstract images of a concrete object, and having students match the correct one, would indicate a level of CT abstraction at work. The novel testing suite was aimed at measuring isolated CT properties, not only in relation to data flow programming but targeting CT development in a more general way. We will now describe in more detail the procedure of each stage.
12.3.3 Stage 1 Procedure
Initiated by Agora Mobile Inc., the first stage (2014–2016) consisted of introducing Vizwik to students in two high school STEM classes in New Brunswick, Canada. The participants were students from two Grade 10 classrooms, Classroom 1 (n = 29; 17 males, 12 females) and Classroom 2 (n = 26; 16 males, 10 females), from whom we obtained parental consent, along with two teachers. Students worked on Vizwik for the entire 60-min class period, 5 times per week for 4 weeks. Each classroom was equipped with a LAN network allowing students to sign up to create an account and connect through desktop computers running the Chrome browser.
At the beginning of the pilot experiment, students were given a 45-min demonstration of Vizwik by the teacher. In the first class, students were presented with tutorials built into Vizwik and asked to create their own mobile app. In the second class, students were presented with the same tutorials but asked to build an interactive game from a step-by-step built-in lesson. The tutorials consisted of a series of nine short lessons built into Vizwik covering the basics of the user interface (UI), using the data flow script editor, creating UI components, and attaching scripts to UI components to handle user events. Students were also given access to 300+ existing apps of varying complexity, which they could open and browse to learn the techniques utilized. The students also had access to 100+ "How To" modules that explained how to add a particular functionality to an app, such as using GPS, taking pictures, and saving data, as well as computer science theory for conditionals, loops, functions, and recursion. While Vizwik also incorporates a social network for project sharing, for this test each student, having their own account in a private classroom, had no means to copy and share their work with other students. Once the preliminary learning phase was complete, students were left to work individually on their project.
This was followed by the first class creating group projects (10) that required the students to design, build, and code their own game app, while the second class followed the steps of the lesson. During the project, students were able to discuss among themselves and ask the teacher questions regarding Vizwik and their project. During the class periods, our team was on-site to observe the reactions, attitudes, and learning of the students. During testing, student activity was tracked within the Vizwik platform, recording quantitative data to the cloud for login times, duration of sessions, and activity within projects, lessons, and the user interface.
At the end of the project period, each student presented their app to the class using an overhead projector, explaining its functionality and answering questions and comments on its design from other students as a means for the teacher to assess the level of learning. Each app was then rated by the teacher on originality, usability, complexity, and use of coding. Once all projects were complete, the students and teachers were interviewed to collect qualitative data.
12.3.4 Stage 2 Procedure
In the second stage (2015–2016), we conducted a case study within a CompéTICA Network initiative [19] to observe the development of CT during programming and coding activities among students in school (Grades 6–9) within the Broad-Based Technology Education (BBTE) curriculum in New Brunswick, Canada. This case study was conducted during the 2015–2016 school year in three steps. We began meeting with our partners in October to set up a research procedure. The first step consisted of a literature review on computational thinking, its definition, development, and assessment (November–January). Then, in the second step, we started building a questionnaire, making a first selection of tasks and having them validated by teachers, students, and the research team (February–April). Nineteen Bebras multiple-choice tasks selected from available online materials were combined with 11 tasks designed by team researchers. The main selection criteria were:
• The presence of at least one well-expressed CT concept in the task.
• The focus on algorithmic thinking because intervention activities were based on
programming and coding through visual or tangible environments.
After this validation phase by one classroom teacher, her Grade 10 students, and the research team, 14 Bebras tasks and 9 tasks designed by the research team were finally selected to form the final 23-task paper-and-pencil assessment test. Finally, we conducted our study in school over a 6-week period (May–June). Students were also equipped with the Vizwik, Scratch, and LEGO robotics tools. Each classroom was equipped with Wi-Fi network access allowing students to connect their personal devices for the completion of their projects.
The participants were students from two groups (Grades 6 and 9), for a total of 24 students (15 females and 9 males), for whom we obtained parental consent. Each group completed a variety of programming activities over a period of 5 weeks. Prior to this, each group was asked to do a pretest (in paper-and-pencil format). During the study, Grade 6 students (n = 10) worked mainly with the Scratch programming environment on a weekly basis (one hour per week), whereas Grade 9 students did activities with the LEGO Robotics Kit and Vizwik on a daily basis (1 h per day). The choice of the software and the methods of presenting it to the students, as well as the tasks they had to accomplish, was made by the teacher (one teacher worked with both groups). Once all projects were complete, the students were tested again and interviewed to collect qualitative data.
12.3.5 Stage 3 Procedure
In the third stage of our study (2016–2017), we conducted a second case study to measure the development of individual aspects of CT during programming and coding activities among students in three First Nations schools (Grades 4–8), at the invitation of the Three Nations Education Group11, in the province of New Brunswick, Canada. The study was distributed between three schools: Grade 4 (n = 10; 3 females, 7 males) in school 1, Grades 5–6 (n = 39; 13 females, 26 males) in school 2, and Grades 7–8 (n = 45; 25 females, 20 males) in school 3, for a total of 104 students, all of whom provided parental consent. Each school provided a LAN-networked lab of desktop computers connected to the Internet through the Chrome browser.
11 www.fneg.org.
The study followed the same outline as Stage 2, with introduction, pretest, curriculum delivery, posttest, and interview phases. The introduction consisted of our team delivering the outline of the study to both students and teachers. The pre- and posttests were administered as multiple-choice questions. No question was specific to coding, except the algorithm questions, which used English-style conditional statements. The questions covered the four aspects of CT from Stage 2, with an added question on data flow (DF) to investigate whether students without any prior knowledge of data flow would be able to parse and comprehend the diagrams. The tests comprised 23 multiple-choice questions given in random order over a 1-h period and completed online on a computer. The test results were collected by computer and stored online. The responses received were 10 from Grade 4 in school 1, 39 from Grades 5 and 6 in school 2, and 45 from Grades 6, 7, and 8 in school 3.
In this study, the delivery phase of the learning material between the pre- and posttesting, unlike in the previous stage, was under our control. Students were exposed to the curriculum 2–3 times per week in 50-min classes. The procedure developed for this phase was to deliver a series of computational thinking challenges with a gradual increase in complexity. The first week began with non-technological "unplugged" challenges in which students were exposed to the computer science theory of binary counting and sorting algorithms. This was followed by 4 weeks of students working through the self-directed curriculum of CodeStudio.org. As an intermediary step towards robotics, students were challenged with tactile learning through electronic projects using the LittleBits circuit construction kit. In the final phase, students were challenged with robotics projects, Grade 4 with Dash and Dot and Grades 5–7 with mBot, each using iPads and the Wonder Workshop and mBot Coder applications, respectively, over Bluetooth and in teams of 2–4 students per robot. Our team was present on-site once a week to assist the teachers, answer student questions, and make observations of progress and learning. Once this phase was completed, the students were tested again and interviewed to collect qualitative data.
12.4 Study Findings
In this section, we present results for each stage of the research, according to each
research question.
12.4.1 Stage 1: Development of CT with Data Flow
RQ1: What lessons do we learn from first field observations of the Vizwik testing in
the classroom?
The following is a summary of first observations, recorded as field notes, during
the test of both high school classes by our on-site team:
Table 12.1 Sample quotes from teacher interviews

Descriptors' codes | Sample quotes
Coding as easy task | "We should have been coding a long time ago, starting in grade 1"
Coding as difficult task | "I find coding very hard, I think I would need more training, but I'm willing to do this given the reaction of the students and how fast they have picked it up"
Engagement | "The students are more engaged with this than any other subject"
Attention | "I've never seen my students so quiet and sitting still for an entire class"
Big success in technology class | "Even children with attention problems are excelling at coding, it seems to be a place they can focus their attention and be creative"
Table 12.2 Sample quotes from student interviews

Descriptors' codes | Sample quotes
Coding as easy task | "I really like coding, it's fun"
Coding as difficult task | "Coding is hard at first, but then after you do some more it gets easier..."
Engagement | "I like the challenges and getting the answers, it is fun..."
Attention | "Working with robots is fun, you can see your code work"
Big success in technology class | "I want to do this next year"
1. We noticed some similarity between the structure of the data flow language within Vizwik and the diagrams used to teach mathematical expressions and equations, which made Vizwik coding familiar to students who had passed through this stage of the mathematics curriculum.
2. Students were able to work effectively in small groups of three or four around a single desktop computer, each of them being involved with equal levels of engagement and attention (see Tables 12.1 and 12.2) and often participating in collaborative discussions while designing their applications.
3. We noticed that some students who usually have difficulty staying concentrated could keep their focus on the Vizwik tasks over a long period of time.
4. Students were able to complete the development of a complex game (consisting of animation, timers, conditionals, and data storage), but were unable to correctly answer questions about how the code worked.
5. Teachers with basic scientific backgrounds, or some simple coding experience, were able to learn Vizwik and follow up on their students' work by creating their own activities.
During and after the study, teachers were interviewed with questions regarding their observations about coding and their students; the quotes captured during these interviews are shown in Table 12.1. The overall response of the teachers was very positive regarding the changes they saw in their students as a result of exposure to coding activities. The students were also interviewed with similar questions regarding their experience with coding, with results captured in Table 12.2.
Again, we can see from the students' responses that the overall experience was a positive one. Of interest is the response to coding as a difficult task, indicating that students are aware that coding can be hard, but that they can also succeed if they persevere. The complexity of coding, or more precisely debugging, may indeed be a catalyst for the development of patience and perseverance, qualities widely researched and associated with success in later life.
Some challenges we noticed during these generally positive experiences need further investigation. Students who already had some experience in visual coding found Vizwik easy to learn, while others who had experience in textual languages found data flow challenging. However, students with no, or little, experience in coding were able to become productive quickly. Students who lacked analytical skills could find learning Vizwik challenging. Some more advanced students were able to move ahead more quickly and at a higher level of complexity.
RQ2: Does visual data flow programming stimulate the development of CT skills in
students?
Following the interviews, each of the 10 group projects for Class 1 was analyzed. The project data generated by the students' use of Vizwik was stored in a database. The structure and content (scripts, operations, UI components) of a project is a reflection of the students' understanding of how to use Vizwik, coding, and concepts related to CT development. The complexity and density of the use of coding elements is also an indicator of learning comprehension and of the sophistication of the app being created. The number of scripts is counted to show the complexity of the project. The number of operations per script shows the complexity of an individual script. The connectivity of the data flow graph, i.e., the number of data links connecting operations, shows the complexity of a script. The number of different operation libraries shows the breadth of the concepts used. The number of conditions, loops, variables, and function calls shows the complexity of the algorithms in the code. The number of event handlers shows the complexity of the user interaction supported. The number of screens shows the complexity of navigation within the app, while the number of UI components shows the complexity of the user interface. Each of the data points was collected for all metrics of all projects, as shown in Table 12.3.
Table 12.3 Analysis of students' projects: counts, for each of the 10 group projects, of scripts, operations per script, data flow graph connectivity, operation libraries used, flow controls, event handlers, screens, and UI components
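To make the metric extraction concrete, the sketch below computes the Table 12.3 counts from a stored project and bins a value into the low/medium/high comprehension bands used in Chart 12.1. The JSON-like project schema and the flow-control operation names are our own assumptions for illustration, since Vizwik's real storage format is not documented here.

```python
# A minimal sketch of deriving the Table 12.3 metrics from a stored project.
# The project schema below is an assumption, not Vizwik's actual format.
FLOW_OPS = {"if", "loop", "call"}  # assumed names of flow-control operations

def project_metrics(project: dict) -> dict:
    scripts = project.get("scripts", [])
    screens = project.get("screens", [])
    ops = [op for s in scripts for op in s.get("operations", [])]
    return {
        "scripts": len(scripts),
        "operations_per_script": len(ops) / max(len(scripts), 1),
        # connectivity = number of data links wiring operations together
        "connectivity": sum(len(s.get("links", [])) for s in scripts),
        "libraries": len({op["library"] for op in ops}),
        "flow_controls": sum(op["type"] in FLOW_OPS for op in ops),
        "event_handlers": sum(s.get("event") is not None for s in scripts),
        "screens": len(screens),
        "ui_components": sum(len(sc.get("components", [])) for sc in screens),
    }

def comprehension_level(value: float, low: float, high: float) -> str:
    """Bin a metric value into the low/medium/high bands of Chart 12.1."""
    return "low" if value < low else "high" if value >= high else "medium"

demo = {
    "screens": [{"components": ["button", "label"]}],
    "scripts": [{
        "event": "onTap",
        "operations": [{"type": "call", "library": "Math"},
                       {"type": "if", "library": "Logic"}],
        "links": [[0, 1]],
    }],
}
print(project_metrics(demo))  # {'scripts': 1, 'operations_per_script': 2.0, ...}
```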
Chart 12.1 Comprehension of Vizwik techniques
Each value in the table is ranked based on the level of comprehension the project team has of the technique being measured, where low (red), medium (yellow), and high (green) values correlate to comprehension level. We can graph these results to show the comprehension levels of each technique, as shown in Chart 12.1.
The chart shows that for all metrics over 50% of the students attained a high (green) level of comprehension. This result would mean that the majority of students developed a correct mental model of the Model-View-Controller pattern used as the basis of the Vizwik development model. Comparing individual metrics, we can see that a simple concept like a screen was understood by nearly all students and, as would be expected, more complex and abstract concepts like event handlers and flow controls were understood by far fewer students. However, our assumption that higher levels of use of a technique correlated with better understanding was found to be flawed. A few projects contained 20+ scripts, indicating a highly complex app. After visual inspection, it was seen that these students had copied the same scripts many times over, only changing a static value in each. This illustrates that the students did not understand the use of dynamic variables, but relied on pattern recognition to solve a coding challenge. As a result, high values for the techniques do not directly correlate with depth of CT understanding. This was further confirmed by visual inspection of projects with high levels of flow controls and graph connectivity, which showed the inverse, i.e., a lack of understanding, resulting in poorly designed and complex code.
In the second class, we observed that students who created the same game with the step-by-step lesson were not able to correctly answer questions about their code. We suspect this result indicates that the step-by-step lessons were being completed by the students without learning CT concepts; rather, they were simply copying the code's structure from the images in the lesson steps.
The observations made at this stage highlighted the need to measure or capture the skills actually used by students in relation to the computational thinking developed through the manipulation of data flow programming. Our analysis of student projects indicated that some CT had occurred, but no direct correlation between the metrics and CT properties was found. This necessitated the implementation of Stage 2 of this study.

Table 12.4 Average scores of solving test questions by level of difficulty

Level of question difficulty | Questionsa Grade-6 | Questionsa Grade-9 | Grade-6 Pre (%) | Grade-6 Post (%) | Grade-9 Pre (%) | Grade-9 Post (%)
Easy (A) | 2 | 9 | 39.4 | 55.0 | 46.7 | 48.4
Medium (B) | 8 | 6 | 25.0 | 28.7 | 44.2 | 41.1
Hard (C) | 13 | 8 | 26.9 | 32.3 | 36.7 | 34.5

aNumber of questions by level of difficulty in each grade
12.4.2 Stage 2: Measurement of CT with Bebras Tasks
RQ3: How do the scores reflect the estimated level of difficulty of each task and the presence of CT elements (or their combination) in those tasks during the pretest and posttest?
When reviewing the test results according to the difficulty level of the questions, we used the simplest way the Bebras team looks at the difficulty of a task: the average scores obtained by students for each task. A task is considered difficult if, during the Bebras challenge, it was solved by only a small percentage of students [52]. We note that the difficulty level was not equally distributed for Grade-9 and Grade-6 (Table 12.4).
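As described above, difficulty is read directly off solve rates; the minimal sketch below uses cut-offs of our own choosing, not official Bebras thresholds.

```python
# A minimal sketch of the solve-rate-based difficulty estimate; the cut-offs
# here are illustrative assumptions, not official Bebras thresholds.
def difficulty(solve_rate: float) -> str:
    if solve_rate >= 0.60:
        return "Easy (A)"
    if solve_rate >= 0.35:
        return "Medium (B)"
    return "Hard (C)"

print(difficulty(0.394))  # 'Medium (B)' for a task solved by 39.4% of students
```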
We observed that, for Grade-6, the highest score was obtained for the Easy questions, at 39.4% for the pretest and 55% for the posttest. This represents the highest increase from pre- to posttest for all grade levels, at 16%. For the Grade-6 Medium and Hard levels, the score differences are significantly lower, at 4% and 5%, respectively. We also note, however, that the Hard level questions were solved slightly better than the Medium ones. For Grade-9, the highest score was also obtained at the Easy level, although the difference between pre- and posttest is smaller (1%). At the Medium and Hard level questions, there is a slight decrease in scores (3% and 3%, respectively). We can see that the average scores obtained by students in sixth grade and ninth grade at both pretest and posttest do not allow us to obtain an objective picture of the evolution of students' mental representations and schemes in the task problem-solving process during the test, or during the learning activities in class between the two tests. This finding could justify the need to develop measurement tools that not only have the characteristics of a classical psychometric test but are also able to capture changes in the implementation of thinking processes through a problem-solving activity, for the development of cognitive and noncognitive skills related to computational thinking. A more explicit explanation of the above results might give rise to further investigation (Table 12.5).
Table 12.5 Average scores of solving test questions by type of computational thinking skills

Type of CT skill involved (number of tasks)a | Grade-6 Pretest (%) | Grade-6 Posttest (%) | Grade-9 Pretest (%) | Grade-9 Posttest (%)
ABb (1) | 20.0 | 50.0 | 86.7 | 71.4
ALc (3) | 40.0 | 36.7 | 55.6 | 42.9
PRd (1) | 90.0 | 100 | 73.3 | 85.7
AB + DEe (1) | 60.0 | 80.0 | 86.7 | 71.4
AB + PR (4) | 25.0 | 20.0 | 23.3 | 21.4
AL + DE (1) | 60.0 | 70.0 | 93.3 | 71.4
AL + PR (2) | 10.0 | 10.0 | 13.3 | 25.0
AB + AL + DE (5) | 14.0 | 24.0 | 38.7 | 27.2
AB + DE + PR (1) | 10.0 | 30.0 | 60.0 | 71.4
AL + DE + PR (1) | 40.0 | 10.0 | 20.0 | 42.9
AB + AL + DE + PR (3) | 33.3 | 30.0 | 44.4 | 52.4

aNumber of tasks for each type
bAbstraction
cAlgorithmic thinking
dPattern recognition
eDecomposition
The results obtained seem to be consistent with previous studies [41, 52] underlining the fact that estimating task difficulty based on the presence of CT elements in the task alone is effective for only some categories of tasks. There are probably other factors that can explain the differences in scores observed according to the estimated level of task difficulty, as well as to the CT elements present in those tasks. Among those factors, we can mention the possible prior exposure of students to CT concepts outside the school context, and the level of students' motivation and engagement with the proposed tasks; the experimentation took place during the last few weeks of the school year, a factor that could have had a negative effect on student motivation and commitment.
RQ4: What are students’ and teacher’s perceptions of the experience?
From analysis of the interviews, the general perception of the teacher was that the use of technology and coding activities was a positive learning experience for the students as concerns the development of problem-solving skills. Table 12.6 shows examples of quotations illustrating different aspects of students' learning, including the development of problem-solving ability, ICT literacy, programming or coding skills, creativity, motivation, and collaboration. These skills were gauged through interviews on teachers' and students' perceptions.
In addition, the teacher noticed (similarly to the observations at Stage 1) that students could demonstrate qualities which described them as (1) knowledge constructors and innovative designers (students demonstrate creative thinking, construct knowledge, and develop innovative products and processes using technology); (2) creative communicators and global collaborators (students use digital media and environments to communicate and work collaboratively, including online or distance collaboration, to support individual learning and contribute to the learning of
Table 12.6 Teacher descriptions of students' behavioral patterns and sample quotes

Keywords | Sample quotes
Problem-solving ability | "They could not solve the problem, but they spent the entire period of the activity trying different things, watching the tutorials online, reading the manuals to try and figure out how to do that programming"
ICT literacy | "They found most of their information online. So they used ICT to help them operate ICTs. That's what was interesting"
Programming or coding skills | "What I could see is that they had different ways of programming"
Creativity | "We can really see the creativity of young people in relation to these games"
Motivation | Positive: "She was motivated to make other applications like Apps store in general"; Negative: "One could observe that some were dropping out, and that they were going to play online games that had nothing to do with what was asked…"
Cooperation | "One of the things I've noticed is the cooperation ... so it's the fact that the students ask for help from their classmates instead of asking the teacher ..."
Knowledge constructor and innovative designer | "Students know and use a deliberate design process for generating ideas, testing theories, creating innovative artifacts or solving authentic problems"
Creative communicator and global collaborator | "When a student was encountering a bug or a problem, another student from another group was helping. Most of the time, they were working in groups of 2, 3 or 4. Students use collaborative technologies to work with others, including peers, experts or community members, to examine issues and problems from multiple viewpoints"
Computational thinker (critical thinking, problem solving, and decision making) | "Students break problems into component parts, extract key information, and develop descriptive models to understand complex systems or facilitate problem-solving"
others); and (3) computational thinkers (students use critical thinking skills to plan and conduct research, manage projects, solve problems, and make informed decisions using appropriate digital tools and resources).
The general perception of the students was also positive. They described their experience as one which helped them learn valuable coding skills (see Table 12.7). Some of the coding activities were described by the students as having been very successful and led to great satisfaction among them. Other activities were more difficult when poorly explained and led to some frustration among students.
After all learning activities were completed, a questionnaire designed to elucidate potential learning outcomes (Table 12.8) was submitted to the students. Our analysis of the responses shows that a large majority of the students, 62%–75%, believed that: (a) they were able to share their ideas when working in a team; (b) they felt confident they knew how to do coding with the tools available in their lab; and (c) coding-robotics-based activities allowed them to improve their ability to learn.
Table 12.7 Descriptors of coding tasks' characteristics and sample quotes

Descriptors' codes | Sample quotes
Coding as easy task | "Most questions were easy to understand, because then the instructions were clear"
Coding as difficult task | "Difficulties with some questions because they weren't well explained. For example, for one the questions said you would pour water from bucket 3 into 4…"
Robotics as favorite activity | "Robots, app because they're fun, interesting, hands on…"
Scratch as favorite activity | "Scratch because it's fun to learn coding, I like it, you can do what you want with it, I feel accomplished when my game works"
Visual coding as next challenge | "Making a game in scratch"
Textual coding task as next challenge | "Mastering new coding skills, build more complex robots (thinking robots, stair climber, tank, etc.)"
Big success in technology class | "Learning how to code… robots, app making"
Table 12.8 Questionnaire (n = 24)

Questions | Disagree or strongly disagree (%) | Neutral (%) | Agree or strongly agree (%)
Q1: I regularly participate in coding-robotics-based activities | 47 | 4 | 49
Q2: I can manage alone when performing coding-robotics-based tasks in the technology lab | 29 | 13 | 58
Q3: I share my ideas and my work with others | 21 | 8 | 71
Q4: I am confident with the technology tools in the lab | 33 | 21 | 46
Q5: I know how to do programming (coding) with the tools available in the lab | 21 | 17 | 62
Q6: I know how to work with robotics-based tools | 29 | 33 | 38
Q7: I know how to find information I need to succeed in my coding-robotics-based activities | 21 | 25 | 54
Q8: I know how to solve some particular problem in the lab | 34 | 8 | 58
Q9: I know how to present and to communicate the results of my work in the lab | 21 | 21 | 58
Q10: Coding-robotics-based activities allow me to improve my learning | 17 | 8 | 75
Q11: Coding-robotics-based activities are useful for my future success in school and further in my life | 21 | 21 | 58
Q12: I want to make a career in science, technology, engineering, or mathematics | 45 | 13 | 42
Table 12.9 Pretest school 1 Grade 4 (n = 10)

Grade 4 | AB (%) | DE (%) | PR (%) | AL (%) | DF (%)
Correct | 80.00 | 35.00 | 40.00 | 46.00 | 25.00
Wrong | 20.00 | 50.00 | 46.67 | 32.00 | 37.50
Don't know | 0.00 | 15.00 | 13.33 | 22.00 | 37.50
A majority of students (54%–58%) believed that: (a) they were able to work alone when performing coding-robotics-based tasks; (b) they knew how to find the information they needed to succeed in their coding-robotics-based activities; (c) they knew how to solve particular problems; (d) they knew how to present and to communicate the results of their work; and (e) coding-robotics-based activities are useful for their future success in school and in life.
Less than half of the students (a) regularly participated in coding-robotics-based activities (49%), (b) were confident with the technology tools in the labs (46%), or (c) were interested in a professional career related to science, technology, engineering, or mathematics (STEM) disciplines (42%). Only 38% of students declared that they knew how to work with robotics-based tools.
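The three reported buckets collapse a 5-point Likert scale. A minimal sketch of that tally, with invented responses, follows.

```python
# A minimal sketch of collapsing 5-point Likert answers (1 = strongly
# disagree ... 5 = strongly agree) into the three buckets of Table 12.8.
# Responses are invented for illustration.
from collections import Counter

def bucket_percentages(answers: list) -> dict:
    buckets = Counter(
        "disagree" if a <= 2 else "neutral" if a == 3 else "agree"
        for a in answers
    )
    return {b: round(100 * c / len(answers)) for b, c in buckets.items()}

print(bucket_percentages([5, 4, 4, 3, 2, 5, 4, 1]))
# {'agree': 62, 'neutral': 12, 'disagree': 25}
```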
Based on the results of Stage 2, we found that the Bebras questions [53, 54],
while reflecting some CT skills, did not allow us to isolate and measure each indi-
vidual skill. Thus, we began to investigate other means of measurement.
12.4.3 Stage 3: Measurement of Specific CT Skills
RQ5: How can we measure the individual CT skills developed by students?
We recall that our research question was formulated in search of how to identify changes in specific computational thinking skills of students as a result of exposure to coding activities. We begin by presenting the results of the pretests for the three schools tested. Each question of the test was evaluated according to three categories of answers: Correct, Incorrect, or "Don't Know." No question was left unanswered. The age and reading level of the students were taken into account in the design of these tests, such that we expected a significant failure rate to occur, in order to establish a baseline and record a differential result in the posttests. However, upon review of the results of School 1 (e.g., Table 12.9), we realized that the failure rate was perhaps too high. Further investigation into the results showed that some students were randomly answering questions and that some questions were simply too hard, thus skewing the results. As a result, we proceeded to remove these anomalies to produce clean data, from which we generated the observations presented in Tables 12.9, 12.10, and 12.11.
The results showed that students across all ages had difficulty in the areas of abstraction, decomposition, and pattern recognition. Students in higher grades also had difficulties in algorithmic thinking, while lower grades fared better.
Table 12.10 Pretest school 2 Grade 5–6 (n = 39)

Grade 5–6 | AB (%) | DE (%) | PR (%) | AL (%) | DF (%)
Correct | 64.10 | 42.31 | 50.00 | 49.23 | 36.54
Wrong | 30.77 | 47.44 | 38.46 | 40.00 | 33.97
Don't know | 5.13 | 10.26 | 11.54 | 10.77 | 29.49
Table 12.11 Pretest school 3 Grade 6–7 (n = 45)

Grade 6–7 | AB (%) | DE (%) | PR (%) | AL (%) | DF (%)
Correct | 50.37 | 44.44 | 46.67 | 42.22 | 40.00
Wrong | 39.26 | 52.59 | 46.67 | 43.56 | 37.22
Don't know | 10.37 | 2.96 | 6.67 | 14.22 | 22.78
Chart 12.2 Average pretest scores
Surprisingly, the majority of students in higher grades were able to recognize and parse the data flow diagrams and answer correctly the majority of the time, but this was not seen in the lower grades (Chart 12.2).
Students typically did not do well on the tests, as shown by the low correct-answer scores of 35% and 46% (corrected), which was expected, since they had no training in computational thinking. However, many students had difficulty with the questions regarding data flow, as this had never been presented to them before. The point of these questions was to determine whether the students would be able to decipher the syntax of data flow diagrams on their own, without any training. In many cases, students were able to do this, but this was primarily the students who scored high on the test, so overall it reduced the score. If we remove the data flow questions, we see a slight improvement overall in the corrected totals, to 49%.
The post-evaluation consisted of the same pretest, with each of the 23 questions changed slightly to generate a different answer, in order to avoid any memory effects from the first exposure, but without changing any of the cognitive properties or the difficulty being measured. The test was administered 1 week after all curriculum was delivered. The test was delivered using the same software-based approach, using a web form, and all answers were stored digitally (Tables 12.12, 12.13, and 12.14).
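The Correct / Wrong / Don't-know breakdowns in Tables 12.9–12.14 are simple per-category tallies; a minimal sketch with invented data follows.

```python
# A minimal sketch of the per-category Correct / Wrong / Don't-know tallies
# behind Tables 12.9-12.14. Student answers are invented for illustration.
from collections import Counter, defaultdict

def category_breakdown(answers: dict) -> dict:
    """answers[student] = list of (CT category, response) pairs."""
    tally = defaultdict(Counter)
    for pairs in answers.values():
        for category, response in pairs:
            tally[category][response] += 1
    return {
        cat: {r: round(100 * c / sum(counts.values()), 2)
              for r, c in counts.items()}
        for cat, counts in tally.items()
    }

students = {
    "s1": [("AB", "correct"), ("DF", "dont_know")],
    "s2": [("AB", "wrong"), ("DF", "correct")],
}
print(category_breakdown(students))
# {'AB': {'correct': 50.0, 'wrong': 50.0}, 'DF': {'dont_know': 50.0, 'correct': 50.0}}
```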
The cumulative post results (Chart 12.3) show a portrait of the increased performance of the students, indicated by higher Correct and lower Don't Know percentages. Abstraction increased in some grades while decreasing in others, while
Table 12.12 Posttest school 1 Grade 4 (n = 7)

Grade 4 | AB (%) | DE (%) | PR (%) | AL (%) | DF (%)
Correct | 50.00 | 42.86 | 57.14 | 23.81 | 50.00
Wrong | 39.29 | 50.00 | 40.00 | 19.05 | 7.14
Don't know | 10.71 | 7.14 | 2.86 | 57.14 | 42.86
Table 12.13 Posttest school 2 Grade 5–6 (n = 37)

Grade 5–6 | AB (%) | DE (%) | PR (%) | AL (%) | DF (%)
Correct | 49.32 | 20.27 | 40.09 | 34.05 | 31.08
Wrong | 39.86 | 77.70 | 50.00 | 54.05 | 46.62
Don't know | 10.81 | 2.03 | 9.91 | 11.89 | 22.30

Table 12.14 Posttest school 3 Grade 6–7 (n = 45)

Grade 6–7 | AB (%) | DE (%) | PR (%) | AL (%) | DF (%)
Correct | 59.46 | 45.95 | 67.57 | 44.14 | 29.73
Wrong | 30.63 | 51.35 | 21.62 | 37.84 | 45.95
Don't know | 9.91 | 2.70 | 10.81 | 18.02 | 24.32
Chart 12.3 Posttest cumulative results
pattern recognition (+17%) and algorithmic thinking (+5%) improved over the start of the project. Surprisingly, comprehension of decomposition, which was not good at the start of the project, was actually worse by 5%. While data flow did not increase beyond 50% correct responses, there was an 11% increase in correct responses over the start of the project, with a 14% increase in incorrect responses.
The next step in our analysis of the results was to establish the null hypothesis that exposure to the learning material presented during the study had no effect on the students' cognitive abilities, specifically on their computational thinking skills. The data was cleaned to remove test results that would skew the analysis away from students who were actively engaged in learning: results that showed the same response for all questions, students who answered "Don't know" a disproportionate amount of the time, and students whose responses produced a pre/post difference of over 50%. Testing the students before exposure set a baseline of their skills; following exposure to the computational learning material, we then tested the students again with the same style of test. These two sets of values were used to calculate averages and standard deviations for a pairwise two-tailed t-test, with which we evaluated our null hypothesis. This was calculated for grades 4, 5–6, and 7, as shown in Tables 12.15, 12.16, and 12.17, respectively.
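As an illustration of this procedure, the following minimal sketch (in Python with SciPy; the records, field names, thresholds, and values are our assumptions for illustration, not the authors' exact cleaning rules or data) applies such filters and runs the pairwise two-tailed t-test on per-student Correct counts:

from scipy import stats

# Hypothetical per-student records: Correct counts on the pre- and posttest,
# plus flags used by the cleaning step.
students = [
    {"pre": 5, "post": 9, "dont_know_rate": 0.10, "uniform": False},
    {"pre": 7, "post": 8, "dont_know_rate": 0.05, "uniform": False},
    {"pre": 4, "post": 7, "dont_know_rate": 0.15, "uniform": False},
    {"pre": 6, "post": 6, "dont_know_rate": 0.90, "uniform": False},  # dropped below
]

def keep(s, n_questions=23):
    # Drop records that would skew the analysis: uniform answer patterns,
    # disproportionate "Don't know" use, or a pre/post swing above 50%.
    swing = abs(s["post"] - s["pre"]) / n_questions
    return not s["uniform"] and s["dont_know_rate"] < 0.5 and swing <= 0.5

clean = [s for s in students if keep(s)]
pre = [s["pre"] for s in clean]
post = [s["post"] for s in clean]

# Paired (pairwise) two-tailed t-test of H0: no pre/post change.
t_stat, p_value = stats.ttest_rel(pre, post)
print(f"t = {t_stat:.3f}, p = {p_value:.3f}, reject H0: {p_value < 0.05}")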
The t-test showed a significant result (p < 0.05) for Correct responses in each of the grades, indicating that our null hypothesis was false and that there was a significant change between students' pre- and post-responses.
Table 12.15 Analysis of pre–post change (n = 5)
Grade 4      Pre/Post Avg   Pre/Post StdDev   Pairwise 2-tailed   H0 = no change
Correct      5.6/9.0        2.7/1.9           0.021               False
Wrong        12.4/11.4      2.2/2.2           0.508               True
Don't know   5.0/2.6        3.9/3.7           0.185               True

Table 12.16 Analysis of pre–post change (n = 5)
Grade 5–6    Pre/Post Avg   Pre/Post StdDev   Pairwise 2-tailed   H0 = no change
Correct      3.1/2.2        3.9/3.0           0.020               False
Wrong        3.6/2.6        4.3/3.3           0.109               True
Don't know   0.6/1.0        0.9/1.3           0.016               True

Table 12.17 Analysis of pre–post change (n = 9)
Grade 7      Pre/Post Avg   Pre/Post StdDev   Pairwise 2-tailed   H0 = no change
Correct      8.7/10.7       2.3/1.9           0.012               False
Wrong        11.4/10.8      3.5/3.2           0.506               True
Don't know   2.9/1.9        3.0/1.9           0.305               True
Table 12.18 Sample quotes from teacher interviews
Coding as easy task: “We should have been coding a long time ago, starting in grade 1”
Coding as difficult task: “I find coding very hard, I think I would need more training, but I’m willing to do this given the reaction of the students and how fast they have picked it up”
Engagement: “The students are more engaged with this than any other subject”
Attention: “I’ve never seen my students so quiet and sitting still for an entire class”
Big success in technology class: “Even children with attention problems are excelling at coding, it seems to be a place they can focus their attention and be creative”
Table 12.19 Sample quotes from student interviews
Coding as easy task: “I really like coding, it’s fun”
Coding as difficult task: “Coding is hard at first, but then after you do some more it gets easier...”
Engagement: “I like the challenges and getting the answers, it is fun...”
Attention: “Working with robots is fun, you can see your code work”
Big success in technology class: “I want to do this next year”
This indicates that the students were able to develop computational thinking skills, as measured by our tests, through exposure to the learning materials. The t-test also showed, however, that the changes in students' Wrong and Don't know responses were not significant (p > 0.05) in all grades. This mixed result is a reason for further refinement of our test design.
During and after the study, teachers were interviewed with questions regarding their observations about coding and their students; the quotes in Table 12.18 were captured during these interviews. The overall response of the teachers was very positive regarding the changes they saw in their students as a result of exposure to coding activities. The students were also interviewed with similar questions regarding their experience with coding; their responses are captured in Table 12.19.
We can see from the students' responses that the overall learning experience was a positive one. Of interest is the response to coding as a difficult task, indicating that students are aware that coding can be hard but that they can also succeed if they persevere. The complexity of coding, or more precisely debugging, may indeed be a catalyst for the development of patience and perseverance, qualities widely researched and associated with success later in life.
12.5 Discussion
12.5.1 Assessment of CT Development
The main goal of the second stage was the elaboration and validation of a CT skills
assessment tool, using the tasks that target abstraction, decomposition, algorithmic
thinking, as well as pattern recognition. The selection of tasks consisted of some of
the Bebras competition items [29]. The results of the pre- and posttests show a general correspondence between the average score obtained per question and the difficulty level predicted by the designers for each proposed question. This supports the claimed validity of the prediction criterion for Bebras questions, which stipulates that the success rate for a particular question can be used to describe the difficulty level of that question ([52], p. 133). However, we cannot ignore that some questions' average scores did not reflect the difficulty levels predicted by the designers; hence the need for a proper validation process, which would require a larger-scale study.
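The following minimal sketch (in Python; the difficulty bands, cut-offs, and success rates are hypothetical illustrations, not Bebras data) shows the kind of check such a validation involves, comparing each question's observed success rate with the designers' predicted difficulty:

# Designers' predicted difficulty and observed fraction of correct answers.
predicted = {"Q1": "easy", "Q2": "medium", "Q3": "hard"}
success_rate = {"Q1": 0.71, "Q2": 0.48, "Q3": 0.55}

def observed_band(rate):
    # Assumed mapping: a higher success rate implies a lower difficulty.
    if rate >= 0.66:
        return "easy"
    if rate >= 0.33:
        return "medium"
    return "hard"

for q, band in predicted.items():
    obs = observed_band(success_rate[q])
    flag = "OK" if obs == band else "MISMATCH: re-examine this question"
    print(f"{q}: predicted={band}, observed={obs} ({flag})")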
We recognize several limitations in our study that are inherent to the format of the tests: paper-and-pencil and multiple-choice. For instance, the teacher mentioned to us that switching from working at computers during class hours to filling in paper forms could be demotivating for some students. Moreover, multiple-choice items do not produce a record of the work that leads students to select an answer, which in some cases could be chosen randomly. As we said before, it would be important to conduct interviews with students to learn how they arrive at their answers. The use of online tools for administering the test could also be an option to consider.
From a more general perspective on the findings, students were exposed to a self-paced learning environment that provided the ability to communicate and collaborate with peers to plan and develop strategies to complete their tasks. Students were also actively involved in their learning activities: designing, creating, testing, exploring, discovering, analyzing, tinkering, using trial and error, refining, debugging, and finding solutions to the tasks. The role of the teacher was to actively engage students in this reflective work by questioning their learning process and challenging them to go deeper in their solutions without giving ready-to-use information.
The third stage of the study focused on the development and validation of a newly elaborated cognitive tool that could measure changes in each CT component in students over time. The test results showed that computational thinking development was more pronounced in students who engaged deeply with the test questions rather than answering them randomly. The use of data flow items, in both the tests and the tools, was found to be intuitive and easy for students to learn. While Vizwik was not used, the Wonder platform for Dash and Dot robots provided evidence, both in test results and in student interviews, that this model of coding was valuable and learnable at a young age.
This study has shown that an effective evaluation of skills related to CT such as
abstraction, algorithmic design, decomposition, or pattern recognition should be
based on a complementary set of measurement tools [35, 55, 56]. These assessment
tools should be able to measure different aspects (cognitive and noncognitive
aspects of learning CT) of the skills to be evaluated. Innovation here could be the
combination of several complementary measurement tools rather than the use of a
single measuring instrument. These complementary assessment tools should also:
(1) cut across coding or programming platforms [16] and (2) call for dynamic infor-
mation that can accurately reflect learners’ abilities and progression over time [56].
12.5.2 Evaluation of CT in Curriculum
From a pedagogical point of view, this implies that practices aimed at developing skills related to computational thinking can alternate between different types of complementary pedagogical approaches (computer-based versus unplugged activities; disciplinary, interdisciplinary, and transdisciplinary approaches, with or without programming and coding tasks) and can draw on complementary learning environments (visual or tangible programming environments, game or application design environments, and simulation and modeling environments).
It is clear that a learning tool's affordances, scaffolding capabilities, and user experience can facilitate the development of computational thinking skills. The present study suggests that further investigation is needed to demonstrate that the affordances of such tools are likely to structure and enrich students' thinking, as well as their ability to solve problems through social interactions (Vizwik uses a social network model) and more or less complex situated learning activities. However, from our testing it was not possible to identify the specific CT skills that each type of programming environment in particular would be likely to promote; identifying these skills could be a subject of future investigation.
Regarding the innovation itself, our study examined several practices involving programming tasks (visual or textual) in the context of a technology-based course at the middle and high school level. Our study reveals several general observations regarding this aspect from institutional, pedagogical, and techno-instrumental perspectives.
First, involving the CompéTICA network as the mainstay of this study provided evidence of the complexity of building synergy between different partners: school boards, teachers, students, stakeholders, researchers, as well as industry (which in our case was a group of designers and developers of computer applications for education). This complexity brings to the fore the issue of the institutionalization of innovative practice. Dealing with this issue and the challenges it brings is critical for the continuity and durability of the practice and its further development, which need to be analyzed from the social, political, economic,
and pedagogical perspectives. The question of educational leadership also seems critical when we seek to promote CT development across curricula as an essential competence for all students.
The next aspect that needs more discussion and research relates to the pedagogical and didactical perspective. First, the place of CT needs to be better articulated in the curriculum, in terms of both its content and its teaching methods. Our findings point to the importance of initial training and professional development for the teachers who will need to take over these innovative practices and make them sustainable and efficient.
At the techno-instrumental level, the question of which technological (as well as physical) environments are best suited to support the development of CT also needs to be discussed in more detail. The tools used by the participants of our study are still novel in the school context. How do we help students and teachers become aware of affordances that are not explicit? What types of digital competences and other sets of skills (often referred to as twenty-first century skills) are needed? How do we develop them so that learners can make more efficient use of the functionalities of all kinds of related technological tools (Vizwik, Scratch, Lego Mindstorms, Swift Playgrounds, etc.), so that these would really become learning tools? What are the learning outcomes in this respect? What are the assessment criteria for their impact on learning (in terms of pedagogy, didactics, economy, ergonomics, etc.) that would let us better grasp the attainment of objectives related to the development of CT? These questions should be investigated in the next stages of our research and beyond.
12.5.3 Defining Computational Thinking
We began our study by reflecting on the experience of introducing a novel software development tool, Vizwik, a visual data flow programming platform that helps students quickly and easily develop prototypes of mobile applications. First results, based on field notes and project structure analysis, indicated a possible positive impact on student engagement, motivation, and collaboration, along with an increased level of productivity while interacting with the Vizwik TLE. Also, some teachers became active users and creators of didactical knowledge within the Vizwik online platform and proceeded to develop and share their lessons with others through social media (similar to what happened, e.g., in the GeoGebra community). From the perspective of the implementation of innovation, this was a positive indicator of ease of integration, learning, and early adoption. The results of this first stage demonstrated, however, that more rigorous studies were needed. This prompted the next stage of the project, for which we planned to include case studies measuring the impact on student learning. At a macro level of reflection on the experience as an example of innovative practice in schools, we notice a certain confusion at the conceptual and semantic levels, where the terms computer science, coding, and programming are employed without a clear distinction and sometimes as being almost synonymous. Is it possible to determine the particular features of each concept and
how they are reflected in the teaching and learning process? Are the learning activities in the programming environment really targeting the development of CT, or are they rather soliciting these abilities as drivers of the development of programming and coding skills?
A clearer and more concise definition of CT did not emerge from our study; we simply continued to study the tools, processes, and students' engagement in CT development, though our perspective of what CT is continued to evolve. While we were ready to provide a limited definition based on four properties, our findings indicate that this limitation rests on an egocentric definition of students' development, one that ignores the potential effects of the environment and social interactions that would fit into a more inclusive exocentric definition.
12.6 Conclusion
A worldwide call for the development of CT has reached many educational systems, including that of New Brunswick, Canada. Since 2014, the CompéTICA partnership network has worked, in collaboration with one provincial school district as well as with software designers, on introducing coding and programming activities into the middle and high school Broad-Based Technology Education curriculum (6–12) [19]. Over 3 years, the study went through three stages following the DBR methodology, which allows for the cyclic development of innovative practices, including design, implementation, collecting and analyzing data, and then redesign.
Our main focus was to track the process of CT development during problem-solving activities in the Vizwik TRE (which involved programming in a novel data flow language) and in two other programming learning environments (Scratch and Lego Mindstorms). We focused on the development of research tools that helped us measure the development of CT skills in students while providing efficient feedback about their progress to teachers. While analyzing low-level aspects of computational thinking gives us some indication of how it develops in students, we cannot ignore higher-level forms of learning, such as the creativity, social interaction, and discussion that go on when students are working on projects and learning in groups, which reflect what actually happens in real life. Therefore, much research remains to make these other dimensions of learning measurable and to understand their overall impact on the development of students and their computational thinking skills. This will enable us to further understand the value of computational thinking and its application in domains other than simply learning to code.
This study thus suggests the implementation of innovative practices that are
based on a complementarity of pedagogical approaches, instruments for measuring
or assessing CT, and suitable technology-rich learning environments in order to
contribute to a comprehensive view of student learning.
Acknowledgments This ongoing study is being conducted with the help of the Canadian Social
Sciences and Humanities Research Council (Partnership Development Grant #890-2013-0062),
New Brunswick Innovation Foundation (2016 Research Assistantship Program), and le Secrétariat
aux Affaires Intergouvernementales Canadiennes du Québec (Programme de soutien à la
Francophonie Canadienne).
References
1. Wing, J. M. (2006). Computational thinking. Communications of the ACM, 49(3), 33–35.
2. Wing, J. M. (2008). Computational thinking and thinking about computing. Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences, 366(1881), 3717–3725.
3. Papert, S. (1980). Mindstorms: Children, computers and powerful ideas. New York: Basic Books.
4. Papert, S. (1996). An exploration in the space of mathematics education. International Journal
of Computers for Mathematical Learning, 1(1), 95–123.
5. Gretter, S., & Yadav, A. (2016). Computational thinking and media & information literacy: An
integrated approach to teaching twenty-first century skills. TechTrends, 60(5), 510–516.
6. Bocconi, S., Chioccariello, A., Dettori, G., Ferrari, A., & Engelhardt, K. (2016). Developing
computational thinking in compulsory education.
7. Webb, M., Davis, N., Bell, T., Katz, Y. J., Reynolds, N., Chambers, D. P., & Sysło, M. M.
(2017). Computer science in K-12 school curricula of the 21st century: Why, what and when?
Education and Information Technologies, 22(2), 445–468.
8. Yadav, A., Good, J., Voogt, J., & Fisser, P. (2017). Computational thinking as an emerging
competence domain. In Competence-based vocational and professional education (pp. 1051–
1067). Cham: Springer International Publishing.
9. Barr, V., & Stephenson, C. (2011). Bringing computational thinking to K-12: What is involved
and what is the role of the computer science education community? ACM Inroads, 2(1), 48–54.
10. Bower, M., Wood, L. N., Lai, J. W., Howe, C., Lister, R., Mason, R., & Veal, J. (2017).
Improving the computational thinking pedagogical capabilities of school teachers. Australian
Journal of Teacher Education, 42(3), 4.
11. Csizmadia, A., Curzon, P., Dorling, M., Humphreys, S., Ng, T., Selby, C., & Woollard, J.
(2015). Computational thinking: A guide for teachers. Computing at School.
12. Faber, H. H., Wierdsma, M. D., Doornbos, R. P., van der Ven, J. S., & de Vette, K. (2017).
Teaching computational thinking to primary school students via unplugged programming les-
sons. Journal of the European Teacher Education Network, 12, 13–24.
13. Grover, S., & Pea, R. (2013). Computational thinking in K–12: A review of the state of the
field. Educational Researcher, 42(1), 38–43.
14. Kalelioglu, F., Gülbahar, Y., & Kukul, V. (2016). A framework for computational thinking
based on a systematic research review. Baltic Journal of Modern Computing, 4(3), 583.
15. Román-González, M., Moreno-León, J., & Robles, G. (2017a). Complementary tools for com-
putational thinking assessment.
16. Chen, G., Shen, J., Barth-Cohen, L., Jiang, S., Huang, X., & Eltoukhy, M. (2017). Assessing
elementary students’ computational thinking in everyday reasoning and robotics program-
ming. Computers & Education, 109, 162–175.
17. Gadanidis, G., Hughes, J. M., Minniti, L., & White, J. G. (2017). Computational thinking,
grade 1 students and the binomial theorem. Digital Experiences in Mathematics Education,
3(2), 77–96. https://doi.org/10.1007/s40751-016-0019-3
18. Karsenti, T., & Bugmann, J. (2017). Transformer l’éducation avec Minecraft? Résultats d’une
recherche menée auprès de 118 élèves du primaire. Montréal: CRIFPE.
19. Freiman, V., Godin, J., Larose, F., Léger, M., Chiasson, M., Volkanova, V., & Goulet, M. J.
(2016). Towards the life-long continuum of digital competences: Exploring combination of
soft-skills and digital skills development.
20. Gauvin, S., Paquet, M., & Freiman, V. (2015). Vizwik–visual data flow programming and
its educational implications. In Proceedings of EdMedia: World Conference on Educational
Media and Technology Montréal, Canada (pp. 1602–1608).
21. Sáez-López, J. M., Román-González, M., & Vázquez-Cano, E. (2016). Visual programming
languages integrated across the curriculum in elementary school: A two year case study using
“scratch” in five schools. Computers & Education, 97, 129–141.
22. Gauvin, S., & Cox, P. T. (2011). Controlled dataflow visual programming languages, VINCI,
August, Hong Kong (pp. 345–352). New York, NY: ACM.
23. Whitley, K. N., Novick, L. R., & Fisher, D. (2006). Evidence in favor of visual representation for the dataflow paradigm: An experiment testing LabVIEW's comprehensibility. International
Journal of Human-Computer Studies, 64, 281–303.
2 4. Selby, C., & Woollard, J. (2014). Refining an understanding of computational thinking.
Author’s Original, 1–23.
25. Brennan, K., & Resnick, M. (2012, April). New frameworks for studying and assessing the
development of computational thinking. In Proceedings of the 2012 annual meeting of the
American Educational Research Association, Vancouver, Canada (pp. 1–25).
26. Good, J., Yadav, A., & Mishra, P. (2017). Computational thinking in computer science class-
rooms: Viewpoints from CS educators. In Society for Information Technology & Teacher
Education International Conference (pp. 51–59). Association for the Advancement of
Computing in Education (AACE)
27. Weintrop, D., Beheshti, E., Horn, M., Orton, K., Jona, K., Trouille, L., & Wilensky, U. (2016).
Defining computational thinking for mathematics and science classrooms. Journal of Science
Education and Technology, 25(1), 127–147.
28. ISTE & CSTA. (2011). Operational definition of computational thinking for K-12 education.
29. Dolgopolovas, V., Jevsikova, T., Savulionienė, L., & Dagienė, V. (2015). On evaluation of
computational thinking of software engineering novice students. In Proceedings of the IFIP
TC3 Working Conference “A New Culture of Learning: Computing and next Generations”
(pp. 90–99).
30. Papert, S. (1991). Situating constructionism. In I. Harel & S. Papert (Eds.), Constructionism:
Research reports and essays 1985–1990 by the epistemology and learning research group,
MIT. Cambridge, MA: MIT.
31. Kafai, Y. B., & Burke, Q. (2014). Mindstorms 2: Children, programming, and computational
participation. Retrieved May, 1, 2016.
32. Turkle, S., & Papert, S. (1990). Epistemological pluralism: Styles and voices within the com-
puter culture. Signs: Journal of Women in Culture and Society, 16(1), 128–157.
33. Blanchard, S., Freiman, V., & Lirrete-Pitre, N. (2010). Strategies used by elementary school-
children solving robotics-based complex tasks: Innovative potential of technology. Procedia-
Social and Behavioral Sciences, 2(2), 2851–2857.
34. Benitti, F. B. V. (2012). Exploring the educational potential of robotics in schools: A system-
atic review. Computers & Education, 58(3), 978–988.
35. Román-González, M., Pérez-González, J. C., & Jiménez-Fernández, C. (2017b). Which cogni-
tive abilities underlie computational thinking? Criterion validity of the computational thinking
test. Computers in Human Behavior, 72, 678–691.
36. Mühling, A., Ruf, A., & Hubwieser, P. (2015). Design and first results of a psychometric test
for measuring basic programming abilities. In Proceedings of the workshop in primary and
secondary computing education (pp. 2–10). New York, NY: ACM.
37. Weintrop, D., & Wilensky, U. (2015). Using commutative assessments to compare conceptual
understanding in blocks-based and text-based programs. In ICER (Vol. 15, pp. 101–110).
38. Meerbaum-Salant, O., Armoni, M., & Ben-Ari, M. (2013). Learning computer science con-
cepts with scratch. Computer Science Education, 23(3), 239–264.
39. Zur-Bargury, I., Pârv, B., & Lanzberg, D. (2013). A nationwide exam as a tool for improving
a new curriculum. In Proceedings of the 18th ACM conference on innovation and technology
in computer science education (pp. 267–272). New York, NY: ACM.
40. Dagiene, V., & Futschek, G. (2008). Bebras international contest on informatics and computer
literacy: Criteria for good tasks. In Informatics education-supporting computational thinking
(pp. 19–30). Berlin, Heidelberg: Springer.
41. Izu, C., Mirolo, C., Settle, A., Mannila, L., & Stupurienė, G. (2017). Exploring Bebras
tasks content and performance: A multinational study. Informatics in Education, 16(1), 39–59.
42. Basawapatna, A., Koh, K. H., Repenning, A., Webb, D. C., & Marshall, K. S. (2011).
Recognizing computational thinking patterns. In Proceedings of the 42nd ACM technical
symposium on computer science education (pp. 245–250). New York, NY: ACM.
4 3. Moreno-León, J., & Robles, G. (2015). Analyze your Scratch projects with Dr. Scratch and
assess your computational thinking skills. In Scratch conference (pp. 12–15).
44. Koh, K. H., Basawapatna, A., Bennett, V., & Repenning, A. (2010). Towards the automatic rec-
ognition of computational thinking for adaptive visual language learning. In Visual languages
and human-centric computing (VL/HCC), 2010 IEEE symposium on (pp. 59–66). New York:
IEEE.
45. Korkmaz, Ö., Çakir, R., & Özden, M. Y. (2017). A validity and reliability study of the compu-
tational thinking scales (CTS). Computers in Human Behavior, 72, 558–569.
46. Grover, S. (2011). Robotics and engineering for middle and high school students to develop
computational thinking. In annual meeting of the American Educational Research Association,
New Orleans, LA
47. Grover, S. (2015). “Systems of Assessments” for Deeper Learning of Computational Thinking
in K-12. In Proceedings of the 2015 Annual Meeting of the American Educational Research
Association (pp. 15–20).
48. Grover, S., Cooper, S., & Pea, R. (2014). Assessing computational learning in K-12.
In Proceedings of the 2014 conference on Innovation & Technology in Computer Science
Education (pp. 57–62). New York, NY: ACM.
49. Gunn, C., & Peddie, R. (2008). A design-based research approach for eportfolio initiatives.
Hello! Where are you in the landscape of educational technology? Proceedings Ascilite
Melbourne 2008
50. The Design-Based Research Collective. (2003). Design-based research: An emerging paradigm for educational inquiry. Educational Researcher, 32(1), 5–8.
51. Freiman, V., & Lirette-Pitre, N. (2009). Building a virtual learning community of problem
solvers: Example of CASMI community. ZDM, 41(1–2), 245–256.
52. Vegt, W. (2013). Predicting the difficulty level of a Bebras task. Olympiads in Informatics, 7,
132–139.
53. Dagiene, V., & Stupuriene, G. (2015). Informatics education based on solving attractive tasks
through a contest. KEYCIT 2014: key competencies in informatics and ICT, 7, 97.
54. Dagiene, V., & Stupuriene, G. (2016). Bebras-a sustainable community building model for the
concept based learning of informatics and computational thinking. Informatics in Education,
15(1), 25.
55. Grover, S. (2017). Assessing algorithmic and computational thinking in K-12: Lessons from a
middle school classroom. In Emerging research, practice, and policy on computational think-
ing (pp. 269–288). Cham: Springer International Publishing.
56. Shute, V. J., Sun, C., & Asbell-Clarke, J. (2017). Demystifying computational thinking.
Educational Research Review.
Chapter 13
Computational Thinking in the Context
of Science and Engineering Practices:
A Self-Regulated Learning Approach
Erin E. Peters-Burton, Timothy J. Cleary, and Anastasia Kitsantas
Abstract Computational thinking has often been overlooked in the K-12 settings,
particularly in the Next Generation Science Standards (NGSS). In this chapter, we
present a social cognitive self-regulated learning approach for infusing computa-
tional thinking into teaching settings using science and engineering practices of
NGSS. Self-regulated learning related to computational thinking is viewed as a
goal-directed process whereby a learner is required to identify a problem, examine
relevant data to inform a solution, develop a solution, and evaluate the solution.
Illustrations on how self-regulated learning cycles can become mechanisms to
model and assess student computational thinking as students are engaged in science
and engineering practices are provided.
13.1 Introduction
Enhancing content knowledge of primary and secondary school students has been
the central focus of local and national curriculum initiatives and policies for the past
couple of decades [1, 2]. Most educators would agree, however, that a quality edu-
cation is not simply the transmission or dissemination of factual knowledge. Rather,
it is a multilayered phenomenon that integrates content-driven instruction with
innovative pedagogical practices that promote independent problem-solving, self-
directed thinking, and learning outside of the classroom.
Broadly speaking, problem-solving represents a process through which individu-
als use certain methods in an orderly fashion to develop solutions to a given problem.
E. E. Peters-Burton (*) · A. Kitsantas
George Mason University, Fairfax, VA, USA
e-mail: [email protected]; [email protected]
T. J. Cleary
Rutgers, The State University of New Jersey, Piscataway, NJ, USA
e-mail: [email protected]
© Springer International Publishing AG 2018
D. Sampson et al. (eds.), Digital Technologies: Sustainable Innovations for
Improving Teaching and Learning, https://doi.org/10.1007/978-3-319-73417-0_13
According to Jonassen [3], problem-solving varies along three different dimensions
including problem type (e.g., complexity), problem representation (e.g., context,
modality), and individual differences (e.g., learner ability to solve problems).
Because of the latter dimension, which involves cognitive, affective, and motiva-
tional/volitional differences, instructional supports should focus on developing mul-
tiple representations of problems along with teaching learners to self-regulate
problem-solving. Thus, learners need to construct a mental model of the situation
and through self-regulation manipulate the problem space to move closer to a solu-
tion [3]. Problem-solving approaches are utilized within many fields including med-
icine, forensics, education, psychology, computer science, and engineering [4]. In
educational circles, problem-solving approaches have been applied at both a sys-
tem-wide level and a content-specific or academic skill level [5, 6]. At the system
level, school personnel use such frameworks to guide data-based decision-making
regarding student placement in specialized educational programs and/or for evaluat-
ing the effectiveness of those supports [6]. Regarding specific academic skills, prob-
lem-solving techniques are emphasized to enhance content area skills and
performance, such as solving mathematics word problems, performing science
investigations, or conducting the engineering design process [7, 8].
More recently, computational thinking (CT) skills, which collectively encapsulate a unique type of problem-solving, represent an emerging area of interest among educators, with some arguing that CT should be emphasized across all content areas [9–12].
tent areas [9–12]. Computer scientists often conceptualize CT as a set of mental
processes involved in formulating problems or models that are represented in terms
of accurate and efficient algorithms [13, 14]. This skill set is viewed as being par-
ticularly relevant in today’s school contexts because of its strong alignment with
advanced, technological innovations and its potential to provide a structure for
thinking that facilitates the production of creative ideas to deal with dynamic, com-
plex situations [12, 15].
Another concept that has received extensive attention in educational circles over
the past couple of decades is self-regulated learning (SRL; [16, 17]). Across theo-
retical paradigms, SRL is typically conceptualized as a cyclical problem-solving
process whereby individuals self-generate thoughts, feelings, and actions to initiate,
sustain, or adapt their use of strategies to attain personal goals [18]. SRL skills are
routinely recognized as a core component of a twenty-first century skill set because
they directly influence achievement [19–21] and are applicable to virtually all aca-
demic and learning activities that students encounter during the school aged and
college years. In fact, researchers have examined the link between SRL skills and
processes and human functioning in terms of academic success [17], psychological
and health functioning [22], sports accomplishments [23, 24], and teacher develop-
ment and training [25, 26].
Similar to SRL, the concept of CT has been gaining much attention for its poten-
tial value and applicability to various academic contexts and situations [10–12, 27].
Although the literature base on SRL strategies and interventions within K-12 con-
texts is more robust than CT approaches at this point, we believe that integrating CT
and SRL principles within a single conceptual framework offers an opportunity to
examine the CT process with greater nuance and to hypothesize about potential
methods for teaching and assessing CT.
We begin this chapter by detailing the nature and core aspects of SRL theory and
principles. We then provide an overview of CT and underscore how the process of
CT is largely governed by the implementation and use of various SRL principles
and processes. The chapter culminates with an illustration of how CT and SRL
principles can be integrated within a context of science and engineering practices,
followed by a discussion of the implications of our integrated model for future
research and practice.
13.1.1 Foundations of Self-Regulated Learning
Over the past couple of decades, SRL principles have been used in school contexts
to support student learning. Research has shown that frequent and quality use of
SRL strategies can lead to optimal outcomes across many different activities, such
as academic studying [28, 29], classroom interaction [30, 31], use of instructional
media [32], investigations in STEM-related areas [20, 33], athletics [34, 35], and
music [36].
There are many distinct theories of academic SRL (see Boekaerts, Pintrich, &
Zeidner [16] or Schunk & Zimmerman [37] for a comprehensive review). Regardless
of the particular model, there is widespread agreement that academic SRL is largely
a goal-directed process whereby a person establishes a goal, develops and imple-
ments a strategic plan, and then gathers and evaluates data to determine goal prog-
ress. SRL can be thought of as a feedback or data-based process in that feedback
about prior performances (generated by the self or provided by others) is used to
help make adjustments or modifications to improve future performances. Thus, SRL
is a highly fluid and dynamic process in that people will often adapt their strategic
plans or their goals when learning is not on track.
In this chapter, we use a social-cognitive framework of SRL. From this perspec-
tive, SRL is a complex process characterized by a three-phase cyclical feedback
loop [18]. In basic terms, this model conveys how individuals approach a task (i.e.,
forethought phase), the strategies they use and how they think during learning (i.e.,
performance phase), and how they evaluate and react to their performance (i.e., self-
reflection). More specifically, the forethought phase refers to influential processes
that set the stage for action, such as analyzing tasks and setting process-oriented
goals (e.g., asking students to think about prior experiences with recognizing pat-
terns). The performance phase includes processes that occur during attempts to
learn, such as implementation of the task and metacognitive monitoring (e.g., ask-
ing students to monitor their progress in seeing commonalities among a variety of
patterns). The self-reflection phase includes processes that occur after performance
efforts, such as making self-judgments about how one performed and why one per-
formed that way (e.g., asking students to determine how their use of CT processes
align with their goals). The SRL process is considered cyclical in nature in that self-
generated or external feedback is used to refine and improve learning over time.
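As a schematic illustration (our sketch, not a formal model from the SRL literature), the three-phase cyclical loop can be rendered in Python; a toy numeric task stands in for any learning activity, and the phase names follow the model described above:

def srl_cycle(target, max_cycles=10):
    # Forethought: set a process goal (tolerance) and an initial strategic plan.
    goal_tolerance = 0.5
    guess, step = 0.0, 4.0

    for cycle in range(max_cycles):
        # Performance: enact the strategy and self-record the outcome.
        error = target - guess
        print(f"cycle {cycle}: guess={guess:.2f}, error={error:+.2f}")

        # Self-reflection: self-evaluate the outcome against the goal...
        if abs(error) <= goal_tolerance:
            return guess

        # ...and make an adaptive inference that feeds the next cycle's
        # forethought phase (adjust the strategy, not just the answer).
        guess += step if error > 0 else -step
        step *= 0.5

    return guess

srl_cycle(target=7.0)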
13.1.2 Computational Thinking
Over the past decade, researchers have grappled with defining and understanding
the concept of CT [27, 38]. Many computer scientists have noted that computational
thinking has a long history. However, Wing’s [11] discussion of this term and her
argument for universally applying this concept was a major factor in stimulating a
renewed interest in understanding and applying this term to education. Over the past
decade, several definitions of CT and descriptions of the core characteristics or fea-
tures of CT have been put forth [13, 14, 27, 38]. Across definitions, there appears to be some consensus that CT represents a set of thinking processes that involves the use of computational practices and methods to develop algorithmic models and, from them, solutions to problems. For the purposes of this chapter, however, we focus primarily on computational practices, because they speak directly to student behaviors in the classroom and because these practices can readily be conceptualized as types of goal-directed, regulatory skills or processes. These computational practices include several distinct activities, such as considering the benefits and limitations of the resources in an environment, reformulating a difficult problem into one that can be solved, and thinking recursively about correctness, efficiency, and aesthetics [11].
Wing’s initial definition sparked much interest and debate regarding its essence
and defining characteristics. In their concise review of different perspectives and
definitions of CT, Grover and Pea [39] found that several components or skills com-
prise CT that, collectively, have the potential to influence school curriculum (see
Table 13.1). These CT components or skills are actions that students can utilize
when engaged in traditional content areas or subjects offered in a school curricu-
lum. For example, structured problem decomposition can help students identify cru-
cial variables to isolate and measure in order to investigate the mechanics of a
biological system. Debugging and systematic error detection can assist students in
finding defects in investigation design when anomalous data occurs. Central to these
skills is abstraction, which is a cognitive process that involves making inferences or
generalizations based on a set of related details or information.
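As a toy illustration of two of these practices (our example, not drawn from the chapter), the Python sketch below decomposes the question "is this water sample within normal ranges?" into small, independently checkable parts and reports exactly which part fails, mirroring structured problem decomposition and debugging/systematic error detection; all ranges and field names are invented:

def within_range(value, low, high):
    return low <= value <= high

def sample_is_normal(sample):
    # Structured decomposition: each factor is isolated and checked on its own.
    checks = {
        "temperature": within_range(sample["temp_c"], 0, 30),
        "salinity": within_range(sample["salinity_ppt"], 30, 38),
        "ph": within_range(sample["ph"], 7.5, 8.5),
    }
    # Systematic error detection: report exactly which check failed.
    failures = [name for name, ok in checks.items() if not ok]
    return len(failures) == 0, failures

ok, failures = sample_is_normal({"temp_c": 12.0, "salinity_ppt": 41.0, "ph": 8.1})
print(ok, failures)  # False ['salinity']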
The focus of learners engaging in CT is on the process through which they attempt
to develop accurate and efficient computational models or algorithms for a given
problem. Similar to SRL paradigms mentioned in the previous section, CT is not
merely a linear or simple process. That is, after identifying a problem, students do
not engage in a single iteration of using a strategy to arrive at a correct solution.
Engaging in CT, like SRL, involves a series of distinct albeit interconnected processes: making a series of judgments as one develops and refines potential solutions to the problem; deploying tactics to iteratively conceptualize problems; and, most importantly, identifying patterns in data, such as points of divergence and convergence, and then troubleshooting problems during the process.

Table 13.1 Computational practices and descriptions
Abstractions and pattern generalizations: Examining a group of patterns and describing them in a way that is clear and efficient
Systematic processing of information: Using heuristics to make sense of an event
Symbol systems and representations: Accurately and concisely portraying an often abstract event with a simplified concrete object
Algorithmic notions of flow control: Management of data using a specified procedure
Structured problem decomposition: Breaking down a complex problem or system into parts that are easier to understand
Iterative, recursive, and parallel thinking: The ability to repeat thinking in cycles to meet a goal (iterative), to think about one's thinking (recursive), and to focus thinking in specific directions (parallel)
Conditional logic: Asserting that the occurrence of one event depends on the occurrence of another
Efficiency and performance constraints: Consideration of hindering and beneficial factors involved in a process
Debugging and systematic error detection: A methodical process for finding and reducing defects
Because novice learners are often unaware of all the tactics required in CT prac-
tices and may not possess the skills to effectively manage and direct their thinking
and behaviors during CT, a framework that integrates the CT process with the ways
in which students manage and regulate their thinking and behaviors during CT
engagement is needed. As noted previously, much like any complex problem-
solving activity, such as diagnostic reasoning, CT entails an integration of pro-
cesses, such as abstraction, decomposition, separation of concerns, problem solving,
using randomization, designing systems, transforming, and simulating solutions.
For novices, or those with limited experience and knowledge in using CT skills, the
melding of these skills or practices can be quite overwhelming. A major thrust of
this conceptual paper is to illustrate that by viewing CT as a set of interrelated, goal-
directed regulatory processes, educators may be better able to provide instruction
that simultaneously teaches students the CT skills and regulatory competencies
needed to support their planning, monitoring, evaluation, and adaptation of these
CT skills.
A few researchers have attempted to introduce CT to teachers with the goal of infusing it into their lesson plans. For example, Yadav et al. [40] developed a one-week computational thinking module aimed at introducing CT principles to a group of preservice teachers (n = 100), delivered as part of one of their regular courses. Pretest and posttest surveys were administered to assess the preservice teachers' views on computing and computational thinking. In general, the survey analysis suggests that the module was successful in creating awareness of computational thinking and of its potential to facilitate academic learning.
More recently, Yadav et al. [41] examined the effects of CT training modules on preservice teachers' understanding of CT and attitudes toward computing. A total of 294 preservice teachers (n = 153 control) participated in the study. Treatment participants received a one-week module on CT as part of one of their courses, while control participants (different sections of the same course) did not. Differences between groups were assessed with a CT quiz (three open-ended questions) and a computing attitude questionnaire. In general, treatment participants had a more nuanced understanding of CT compared to control participants. Results from the attitude survey were mixed, as no differences were found in comfort with or interest in computing. Overall, these findings suggest that more intervention research and teacher professional development is needed on how to integrate CT into a classroom setting.
13.2 The Intersection Between Self-Regulated Learning
and Computational Thinking
To begin our discussion of the merging of CT and SRL processes, we remind read-
ers that SRL is defined as a goal-directed process in which individuals select and
use, monitor, and evaluate the effectiveness of task-relevant strategies to achieve
their goals. Specifically, self-regulated individuals not only select and use strategies
that will help them meet the demands of a given activity, they monitor their use of
these strategies and make adaptations and adjustments when they do not reach their
goals [18]. Because CT involves a similar process of iteratively refining and tweak-
ing the nature of algorithms to solve a given problem, we believe that there is much
conceptual and practical value in educators viewing CT as a type of self-regulatory
event (see Fig. 13.1).
At a broad level, CT and SRL are both goal-directed activities designed to opti-
mize success on a particular problem or task. Although CT can be used to study a
wide range of natural phenomena, the underlying goal of this process is to develop
a computational model or algorithm that consistently yields predictable outcomes
and that is quite efficient in generating such solutions. To attain this goal, computa-
tional thinkers select and use a series of computational practices or strategies (e.g., abstraction, iterative or recursive thinking, pattern matching, generalization, debugging, and error detection) to help develop and refine their computational thinking repertoire (see Table 13.1). Although broader in scope, the use of these computational practices to produce an ideal algorithm is similar to situations when students use prediction or summarization strategies to optimize reading comprehension, or when they use a number line or draw a picture to represent algebra word problems. That is, an effective computational thinker, much like a self-regulated learner, will deploy specific computational practices, as needed, to reach the goal of developing a solution to a problem.
Fig. 13.1 Integrated model of self-regulated learning and computational thinking to solve a problem (adapted from [18]). The figure shows the three SRL phases arranged in a cycle around a central task ("Determine if there is a difference in bird migration patterns in the past 20 years"):

Forethought phase. Goal setting: develop an efficient and effective algorithm to solve a problem. Planning: identify key species to demonstrate patterns. Self-efficacy: recall prior successes or vicarious successes with identifying patterns. Task value: identify the value of identifying differences over time in order to take action to improve the situation. Goal orientation: seek mastery and be willing to make mistakes in order to develop the most efficient and effective algorithm to solve the problem.

Performance phase. Self-instruction: articulate the steps in the algorithm used to solve the problem. Attention focusing: decompose the problem to determine the most important factors in solving it. Self-recording: record errors and successes in steps toward the solution to monitor progress. Metacognitive monitoring: mentally track progress to enhance awareness so that steps toward the solution are intelligible, logical, systematic, and evidence based.

Self-reflection phase. Self-evaluation: examine efficiency and performance constraints in the algorithm for the problem-solving process. Causal attribution: use conditional logic to determine the causes of successes and errors in the algorithm for the problem-solving process. Self-satisfaction: the affect that results following the evaluative judgments. Adaptive inference: find other heuristics that may be helpful in refining the algorithm.
The process of CT, however, is not simply a linear application of specific computational methods to arrive at an effective algorithm. It entails a dynamic, fluid, and nonlinear process that shifts and evolves as individuals encounter
challenges and obstacles. It is important to recognize that CT activities are similar
to other types of learning tasks in that students will invariably encounter challenges,
areas of confusion, and mistakes or errors. To engage in effective CT, students need
to possess an awareness of their strategic attempts to solve a given problem, to con-
tinuously evaluate the quality and effectiveness of their computational models, and
to make decisions about which computational practices to utilize or adapt. The
“internal processes” of CT can easily be mapped onto the self-observation, self-
judgments, and self-reaction processes of the three-phase cyclical feedback loop
highlighted in the previous section ([18]; see Fig. 13.1).
To maintain awareness of progress toward developing an effective algorithm, individuals need to self-observe, that is, to monitor or mentally track “specific aspects of their own performance, the conditions that surround it, and the effects that it produces” ([18], p. 19). The information that individuals gather about their own behaviors, or about the outcomes of using a particular algorithm, is used to evaluate the overall effectiveness of their algorithms for solving a given problem.
To make these types of evaluative judgments, computational thinkers use their goal
(i.e., to develop an accurate and efficient algorithm) as the standard against which to
evaluate their progress. When the quality of one’s computational model is less than
optimal (i.e., leads to mistakes or inconsistencies or is highly inefficient), it signals
to an individual that he or she needs to either deploy new computational methods or
strategies to refine their algorithm or to refine or adapt existing ones.
After individuals make judgments about “success” or “failure” in their computa-
tional thinking activity, SRL theorists would argue that they will naturally begin to
make other types of judgments and self-reactions such as attributions and adaptive
inferences. If a biology student is attempting to find an unfamiliar organism’s food
source and she realizes her algorithm is faulty in some way, for example, if she does
not include all potential food sources for the organism in her investigation, she
needs to ask herself about the potential reasons why it was problematic (i.e., attribu-
tions) and the steps or actions that she can take to correct it or improve (i.e., adaptive
inferences).
In short, CT activities in which individuals engage can be conceptualized as a
regulatory event that involves goal-setting (“What am I trying to accomplish when
engaging in this activity?”), strategic planning and use (“Which computational
thinking strategies will help to develop an effective and efficient algorithm?”), mon-
itoring (i.e., mentally tracking one’s behaviors, use of computational practices, and
outcomes produced by an algorithm), evaluation (“Is my algorithm accurate, effi-
cient, and optimal?”), and adaptation (“What is wrong with my computational
thinking repertoire and how can I effectively adapt or change it?”). For example,
suppose Linda, an 11th grade student, is presented with a problem that needs to be solved, such as finding a process to distinguish genetically modified seeds from those that are not. To arrive at a correct solution, Linda will deploy a series of strategies to further refine and clarify the problem, facilitating her progress toward the solution: the goal of creating a protocol that will identify genetically modified seeds.
13.3 The Use of SRL to Promote CT in Classrooms
Given the overlap between SRL and CT and because both of these skill sets are
linked to a problem-solving process, we believe that an SRL framework can be use-
ful for understanding the specific processes of computational thinking. Because an
SRL framework explicates a series of well-defined processes, teachers can use SRL
to help students organize information and skills, to plan solutions to a defined prob-
lem, and to reflect and adapt their problem-solving approach to optimize the devel-
opment of algorithms or computational models. Through SRL training, students can
learn to become more effective problem-solvers rather than merely reacting to
teacher instructions. In a sense, SRL is a way to manage the conceptual thinking process: an algorithm of sorts that analyzes thinking processes for the most effective path from problem identification to solution generation.
Teachers can help students become "regulated" computational thinkers by prompting and guiding them to monitor and self-generate feedback about their CT skills, and by providing them with additional information regarding the processes they need to refine when engaging in this process. Measures of SRL, such as SRL
microanalysis [42], can be useful as both instructional prompts and as an assess-
ment of student progress. SRL microanalysis, which is a type of structured inter-
view assessment protocol, examines students’ regulatory processes (goal-setting,
planning, monitoring, reflection) as they engage in specific learning tasks or activi-
ties. Because this type of assessment generates qualitative, process-oriented infor-
mation about how students engage in a task, teachers can use this approach and the
corresponding data it yields to generate “actionable feedback” for students. That is,
we understand the essential characteristics of CT, but do not yet fully understand
how to teach or train students to do it. Given that SRL microanalytic questions
reveal information about how students set goals, monitor their progress, engage in
self-evaluation, and adapt strategies to achieve their goals, the use of microanalytic
protocols can be extremely useful in analyzing “regulation-oriented” CT. In other
words, a close examination of SRL processes of students while engaged in CT will
naturally provide clues and information about the quality and nature of students’ CT
skills that can subsequently be used by teachers to assess student growth. The imple-
mentation of SRL microanalysis as a prompting technique is discussed later in this
chapter in the context of science and engineering practices.
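One way to picture such a protocol (a sketch with paraphrased, illustrative questions, not a validated microanalytic instrument from [42]) is as a set of prompts keyed to the SRL phase in which a teacher would pose them:

# Illustrative microanalytic prompts organized by SRL phase (hypothetical).
MICROANALYTIC_PROMPTS = {
    "forethought": [
        "What are you trying to accomplish in this task?",
        "What is your plan for getting there?",
    ],
    "performance": [
        "How are you keeping track of whether your approach is working?",
    ],
    "self_reflection": [
        "How well did you do, and why do you think that happened?",
        "What will you change next time?",
    ],
}

def prompts_for(phase):
    # Return the interview prompts posed at a given point in the task.
    return MICROANALYTIC_PROMPTS[phase]

for question in prompts_for("self_reflection"):
    print(question)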
13.3.1 Teaching Computational Thinking in the Context of Science and Engineering Practices
Wing [11] argued for infusing CT into a variety of contexts because it is a common way humans think and because it complements and combines mathematical and engineering thinking. Other researchers [39, 43] also underscore the importance of teaching CT, but point out the lack of a clear definition of it. With the advent
of the most recent national science standards in the USA, the Next Generation
Science Standards (NGSS), there is both good and bad news on these fronts. The
good news is that the NGSS make an intentional effort to integrate CT into science
classrooms. The bad news is that the new standards add no clarity to the definition
of CT [44].
The Next Generation Science Standards framework introduces a dramatic change
from previous standards, building from the National Science Education Standards
[2], and the Benchmarks for Scientific Literacy [1]. The National Science Education
Standards and Benchmarks for Scientific Literacy, from which the state standards in
the United States are derived, mainly focused on content standards and included few science process skills, such as observing and drawing conclusions, as expectations.
The NGSS extends beyond a content-only focus by taking a more holistic approach
so educators can more closely align school science with authentic scientific and
engineering practices.
The NGSS differs from the prior curriculum standards by weaving together three
strands of learning domains: Disciplinary Core Ideas, Crosscutting Concepts, and
Science and Engineering Practices. Disciplinary Core Ideas (DCIs) set the factual
content standards that are the building blocks of the curriculum. They are the core
ideas that run across science and engineering disciplines and include topics such as
energy, ecosystems, and the Earth’s place in the universe. The DCIs are meant to be
taught in increasing complexity and depth as students progress to higher grade lev-
els. The Crosscutting Concepts (CCCs) represent broad themes that help to group
the other standards into ideas that are essential to science, such as cause and effect,
systems, and patterns. The Science and Engineering Practices (SEPs) explain the
disciplinary process skills that embody the development of science content knowl-
edge. There are a total of eight SEPs, one of which is CT. These practices include:
asking questions and defining problems; developing and using models; planning
and carrying out investigations; analyzing and interpreting data; using mathematics
and computational thinking; constructing explanations and designing solutions;
engaging in argument from evidence; and obtaining, evaluating, and communicat-
ing information.
SEPs are the skills that are central to learning how models of natural phenomena
are generated from evidence in the discipline of science and can be considered the
mechanisms behind the ways that the broader themes of the Crosscutting Concepts
work. The SEPs can be considered the linkage between the factual DCIs and the
thematic CCCs. For example, the SEP of analyzing and interpreting data can be
connected to the theme of cause and effect because certain data, such as those col-
lected in an experimental design, can be used to establish a cause and effect rela-
tionship, while correlational analysis cannot be used to determine cause and effect
relationships. Further, although a cause and effect relationship may be seen in data
patterns from an investigation, the DCIs must be understood to explain how the
cause and effect relationship can occur. An increase in the salinity of water would
cause an increase in the density of the solution as observed in an analysis of data
from an experiment, but the explanation of why this relationship happens relies on
knowledge about the arrangement of molecules of the solvent and the solution and
a conceptual understanding of density. The ability to accurately analyze data allows
a learner to connect the concepts of molecular arrangement and density while adding
to the suite of evidence about the form of cause-and-effect relationships.
Since the SEPs are the lynchpin to integrating the three learning domains in
NGSS, it is important to understand how they can be taught effectively. Many of the
eight SEPs from NGSS have been taught in science classrooms in the past, such as
asking questions and defining problems; planning and carrying out investigations;
analyzing and interpreting data; constructing explanations and designing solutions;
and obtaining, evaluating, and communicating information [2]. However, the SEPs
of developing and using models, using mathematics and computational thinking,
and engaging in argument from evidence have been less prominent in prior stan-
dards and, therefore, are new to students and teachers in science. Of the three SEPs
that have been less emphasized in classrooms, arguably using mathematics and
computational thinking has been the least adequately defined for classroom prac-
tice. To complicate matters, the NGSS definition of computational thinking is not
aligned to researchers’ definitions of computational thinking.
NGSS defines “Using Mathematics and Computational Thinking” as follows:
Although there are differences in how mathematics and computational thinking are applied
in science and in engineering, mathematics often brings these two fields together by
enabling engineers to apply the mathematical form of scientific theories and by enabling
scientists to use powerful information technologies designed by engineers. Both kinds of
professionals can thereby accomplish investigations and analyses and build complex mod-
els, which might otherwise be out of the question ([7], p. 65).
The CT definition in the NGSS focuses mainly on mathematical thinking and
computing devices rather than embracing the practices illustrated in Table 13.1,
such as efficiency and performance constraints and iterative, recursive, and parallel
thinking. Because the NGSS will guide future curriculum development across
the United States, it is alarming that the practice named CT in the NGSS bears so
little resemblance to the ways the research base defines CT. More effort will be
needed to send a stronger message to educators about the necessary
components of CT. Approaching the teaching of CT through the framework of SRL
is one way that CT can be strengthened in curriculum. Strategically approaching
how students are engaged in the learning task requires knowing what key features
of CT educators want students to know. CT provides a tangible, goal-oriented pro-
cess from which to instruct and assess student learning. SRL theory shows promise
as an analytical framework from which to parse out how CT is being accomplished
by students, as well as for prompting educators to pinpoint critical features of CT
that can be integrated with SRL processes during a specific science activity.
In this section, we examine how CT, SRL, and the SEPs work together in an
integrated model. Figure 13.2 illustrates this model with respect to a specific SEP,
Practice 4: Analyzing and Interpreting Data. The subprocesses in the three phases of
SRL align with CT practices as seen in Fig. 13.1 and are reinforced when the task
being regulated is to apply analysis and interpretation of data in a scientific way.
Task: Perform analysis and interpretation of bird migration data for the past 20 years.

Forethought phase
- Goal setting: Generate valid and reliable evidence that leads to a scientific claim.
- Planning: Identify strategies that increase validity and reliability of evidence.
- Self-efficacy: Recall prior successes or vicarious successes with data analysis.
- Task value: Identify the value of data analysis that is valid and reliable in generating strong scientific knowledge.
- Goal orientation: Adopt a mastery approach and be willing to make mistakes and learn from those mistakes to improve the algorithm for valid and reliable analysis.

Performance phase
- Self-instruction: Articulate methods for validity and reliability.
- Attention focusing: Decompose the steps for analysis and determine ways to improve validity and reliability.
- Self-recording: Record successes and errors regarding validity and reliability of the data analysis.
- Metacognitive monitoring: Mentally track progress to enhance awareness so that steps toward the solution are intelligible, logical, systematic, and evidence based.

Self-reflection phase
- Self-evaluation: Examine efficiency and performance constraints for valid and reliable data analysis (such as collection techniques).
- Causal attribution: Use conditional logic to determine the causes of recorded successes and errors.
- Self-satisfaction: Affect that results from the evaluative judgements.
- Adaptive inference: Find other heuristics that may be helpful in improving validity and reliability.

Fig. 13.2 Integrated model of SRL, CT, and SEP (adapted from [18])
Teaching novices about integrated understandings is difficult because they cannot be
taught holistically. Integrated ideas must first be taught by their components, so novices
can master one process at a time as they integrate their new knowledge. Then,
much as with CT, teachers can define these abstractions for novices, work with them
in multiple layers, and help them understand the relationships among the layers. Examples of
the application of SRL strategies as a tool used to decompose CT in the context of
an SEP (data analysis) can be seen in Table 13.2. Unpacking CT in the context of
performing a science practice, such as data analysis, is useful for explicitly showing
students how to use CT in a tangible way when they are engaging in science and
engineering practices.
Table 13.2 Examples of application of CT to research investigations in chemistry and Earth science

CT concept: Break a problem into parts or steps
- Application to data analysis in a science investigation: Break the problem down into measurable variables.
- Chemistry investigation (What factors affect the rate of dissolving a solute in a solvent?): Determine the independent variables that can be measured and altered, such as size of solute particles, temperature, types of substances, and rate of stirring.
- Earth science investigation (Interpret a stratigraphic cross section): Identify all instances of uplift, sedimentary accumulation, subsidence, folding, faulting, intrusion, eruption, and erosion.

CT concept: Recognize and find patterns or trends
- Application to data analysis: Monitor for patterns that demonstrate consistency.
- Chemistry: A student notes that as the particle size of the solute increases, the rate of dissolving decreases.
- Earth science: Sedimentary formations can be placed into two broad groups, clastic and precipitate.

CT concept: Develop instructions to solve a problem or steps for a task
- Application to data analysis: Create a heuristic to convert and analyze data to answer research questions.
- Chemistry: Calculate the range of differences among trials per variable to determine the tolerance for variances in a variable; tolerance for variances may differ from variable to variable (a minimal script for this calculation follows the table).
- Earth science: Use logic to organize the sequence of geologic events using the principle of cross-cutting relationships and the principle of superposition.

CT concept: Generalize patterns and trends into rules, principles, or insights
- Application to data analysis: Using trends in data, create a statement that generalizes the relationships between the variables.
- Chemistry: Looking across all variables in solution, students note that certain changes in variables lead to increased solubility and others lead to decreased solubility; students generalize these observations into relationships.
- Earth science: Look across different stratigraphic columns to find similar sequences of geologic events.
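As one concrete rendering of the "develop instructions" row above, the chemistry column's range-of-differences calculation could be scripted as follows. This is a minimal sketch with invented trial values; only the calculation itself (the range across repeated trials per condition) comes from the table.

# Invented dissolving-time trials (seconds), grouped by manipulated condition.
trials = {
    "particle size: coarse": [42.1, 44.8, 43.5],
    "particle size: fine": [18.2, 17.6, 19.0],
    "temperature: 20 C": [35.4, 36.1, 34.9],
    "temperature: 40 C": [21.7, 22.5, 20.9],
}

for condition, times in trials.items():
    spread = max(times) - min(times)  # range of differences among trials
    mean = sum(times) / len(times)
    # The per-condition range is a rough "tolerance for variances" and may
    # differ from variable to variable, as the table notes.
    print(f"{condition}: mean = {mean:.1f} s, range = {spread:.1f} s")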
Further, overlaying the SRL model on top of the decomposed
CT in the context of the SEPs gives students language with which to describe and
evaluate their own learning processes.
The integration of SRL, CT, and SEPs can be approached from the perspective of
the SEP being taught and how it relates to SRL and CT. This integration can be put
into practice through a suite of prompts, such as the SRL microanalysis technique
[42]. As evidenced by the proliferation of
SRL strategy publications directed toward teachers [45–47], teachers are in need of
SRL prompts that can be embedded in lessons that are already in use. The SRL
prompting strategies can equip teachers with tangible ways to communicate skills
needed to promote student CT during science investigations in school and to assess
student proficiency in SRL strategies. The prompts can be used in two ways: as an
instructional tool and as an assessment tool. As an instructional tool, the SRL
prompts can give students clues for using CT when performing data analysis, much
like a mentor but in text form. The SRL prompts can be instructive by guiding
students to think computationally during any of the eight SEPs.
Table 13.3 Sample instructional and assessment CT prompts aligned with self-regulated learning phases for Practice 4

CT aspect: Decomposition
- Instructional prompt (SRL: Forethought; see Fig. 13.1): Plan to represent your data in a table that is organized to help you explain what you see.
- Assessment prompt: How did you break down the variables and organize your table to communicate your findings?

CT aspect: Iterative, recursive, and parallel thinking
- Instructional prompt (SRL: Forethought; see Fig. 13.1): Organize your data to explain evidence to answer your research question.
- Assessment prompt: How did the organization of your data table help you answer your research question?

CT aspect: Representation
- Instructional prompt (SRL: Performance; see Fig. 13.1): Construct graphical displays of data to see overall relationships.
- Assessment prompt: How does this representation help communicate relationships?

CT aspect: Abstractions
- Instructional prompt (SRL: Performance; see Fig. 13.1): Analyze your data to determine differences and similarities in your findings.
- Assessment prompt: What steps did you take to summarize patterns in your data?

CT aspect: Abstractions
- Instructional prompt (SRL: Self-reflection; see Fig. 13.1): Determine if you have correlational or causal evidence to answer your research questions.
- Assessment prompt: How are these findings related to correlational or causal evidence?

CT aspect: Iterative, recursive, and parallel thinking
- Instructional prompt (SRL: Self-reflection; see Fig. 13.1): Identify limitations of your data analysis.
- Assessment prompt: How did you find the limitations in your data analysis?
An assessment prompt can be used to gather student proficiency information for the teacher
after a student masters the SEP from prior instruction. Table 13.3 provides examples
of SRL prompts related to aspects of CT that can be used during NGSS Practice 4,
Analyzing and Interpreting Data. For example, a student is given the task of designing
an investigation to determine the factors influencing the period of a pendulum (the
time it takes to swing back and forth). The student is prompted to employ the CT practice
of decomposition when she is planning her data table so that her analysis is system-
atic and comprehensive. The prompt is provided as instruction so that the student
has some mentorship in setting goals to solve the problem. The student would
accomplish her goal by thinking about all of the variables that are involved in the
mechanics of a pendulum, such as length of the string, shape of the bob, and weight
of the bob. Then the student could design a table for collecting the data in an organized
fashion, which will lead to a systematic analysis. After the student achieves
mastery of the practice, she can be assessed on this skill using the SRL prompts
formed into questions: “How did you represent your table in an organized fashion?”
and “How did the design of your data table help you accomplish a thorough data
analysis?”
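To suggest how the decomposed pendulum investigation might translate into an organized data table, the following sketch enumerates the variables named above and tabulates illustrative periods. It assumes the standard small-angle formula T = 2π√(L/g) to generate example values; in a classroom, the period column would hold measured times, and the specific trial values here are hypothetical.

import math

g = 9.81  # gravitational acceleration, m/s^2

# Decomposition: the measurable variables named in the text.
trials = [
    # (string length in m, bob mass in kg, bob shape)
    (0.25, 0.05, "sphere"),
    (0.50, 0.05, "sphere"),
    (1.00, 0.05, "sphere"),
    (0.50, 0.10, "cube"),
]

print(f"{'length (m)':>10}  {'mass (kg)':>9}  {'shape':>6}  {'period (s)':>10}")
for length, mass, shape in trials:
    # Small-angle approximation; a stand-in for measured stopwatch times.
    period = 2 * math.pi * math.sqrt(length / g)
    print(f"{length:>10.2f}  {mass:>9.2f}  {shape:>6}  {period:>10.2f}")

# Reading down the table shows the period varying with length but not with
# mass or shape -- the pattern the analysis prompts ask students to notice.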
13.4 Educational Implications and Future Directions
CT is a key thinking practice that is not emphasized in the United States. Not only
is CT underused in K-12 classrooms, but where national standards have emphasized
it, it has been glossed over and misaligned with the research-based concept of CT
[44]. Building students’ CT is a powerful and necessary component of instruction
because it represents a universally applicable skill set and is currently a missed
opportunity in curriculum. Primary and secondary educators have minimal experi-
ence with teaching CT. Thus, not only is a clear definition of CT necessary in the
standards, but we also need a learning theory from which to structure this type of
thinking. From our perspective, SRL theory provides the learning foundation and
approach from which CT skills can be taught. Constructing a mental toolkit of strat-
egies from various perspectives to solve problems and design systems results in
people who are better thinkers and learners [44]. Accordingly, in the last few years,
researchers [48] have discussed the importance of computing education research and
argued that research in this area can grow by focusing on faculty, doctoral students,
conferences and journals, and funding. Further, Franklin [49] argued that the time
has come to provide adequate resources for computing education research.
One of the hallmark features of SRL is the premise that students are able to adapt
and adjust their strategies, behaviors, or beliefs as they attempt to complete a given
learning task or activity [19]. CT is similar in that individuals must constantly make
strategic adaptations during a given task, but is distinct in that individuals rely on
data generated from computer science and programming principles to facilitate
decision-making and adaptation.
In sum, students who engage in CT naturally engage in cyclical, regulatory
thinking. That is, they must possess knowledge of CT practices and tactics and then
develop plans to reach their goals (forethought), be able to effectively implement
and monitor their use of these CT practices (performance), and then evaluate and
react to feedback about their performance or progress towards the problem solution
(self-reflection). Further, because CT inherently involves an iterative, potentially
time-consuming process, students must be motivated to engage in CT. Thus, it is
not enough to simply focus instruction on CT tactics and practices; one must also
consider the regulatory and motivational processes that underlie successful perfor-
mance on these activities.
Future research should focus on definitional issues of CT, which continue to
hinder research. Researchers warn of the potential downside of equating computer
science with computational thinking as doing so has a limiting effect on computa-
tional thinking [50, 51]. In addition, much remains to be learned about assessing the
effectiveness of CT in classroom settings and investigating how CT skills can be
taught so learners develop competencies through self-regulated learning. CT and
SRL have the potential to enhance science and engineering education in K-12 set-
tings by providing guidance on problem-solving. It is clear that these processes are
key thinking processes in STEM education, but, similar to SRL principles, CT has
traditionally not been included in an already crowded STEM curriculum. Thus, we
must find areas in which both SRL and CT can be incorporated and used as a framework
of procedural knowledge to solve problems and efficiently learn domain-specific
content.
References
1. American Association for the Advancement of Science. (1993). Benchmarks for scientific lit-
eracy. New York, NY: Oxford University Press.
2. National Research Council. (1996). National science education standards. Washington, DC:
National Academy Press.
3. Jonassen, D. H. (2000). Toward a design theory of problem solving. Educational Technology
Research and Development, 48(4), 63–85.
4. Jones, B. F., Rasmussen, C. M., & Moffitt, M. C. (1997). Psychology in the classroom: A series
on applied educational psychology. Real-life problem solving: A collaborative approach to
interdisciplinary learning. Washington, DC: American Psychological Association. https://doi.
org/10.1037/10266-000
5. De Corte, E., Mason, L., Depaepe, F., & Verschaffel, L. (2011). Self-regulation of mathe-
matical knowledge and skills. In Handbook of self-regulation of learning and performance
(pp. 155–172). New York: Routledge. https://doi.org/10.4324/9780203839010.ch10
6. Reschly, A. L., Huebner, E. S., Appleton, J. J., & Antaramian, S. (2008). Engagement as
flourishing: The contribution of positive emotions and coping to adolescents’ engagement at
school and with learning. Psychology in the Schools, 45(5), 419–431. https://doi.org/10.1002/
pits.20306
7. National Research Council. (2012). A framework for K-12 science education: Practices,
crosscutting concepts, and core ideas. Committee on a Conceptual Framework for New K-12
Science Education Standards, Board on Science Education, Division of Behavioral and Social
Sciences and Education. Washington, DC: The National Academies Press.
8. De Corte, E. (2011). Constructive, self-regulated, situated, and collaborative learning: An
approach for the acquisition of adaptive competence. The Journal of Education, 192(2/3),
33–47. https://doi.org/10.1177/0022057412192002-307
9. Rich, P. J., & Hodges, C. B. (2017). Emerging research, practice, and policy on computational
thinking. Cham, Switzerland: Springer.
10. Tatar, D., Harrison, S., Stewart, M., Frisina, C., & Musaeus, P. (2017). Proto-computational
thinking: The uncomfortable underpinnings. In P. J. Rich & C. B. Hodges (Eds.), Emerging
research, practice, and policy on computational thinking (pp. 63–84). Cham, Switzerland:
Springer International Publishing.
11. Wing, J. M. (2006). A vision for the 21st century: Computational thinking. Communications of
the ACM, 49(3), 33–35.
12. Weintrop, D., Beheshti, E., Horn, M., Orton, K., Jona, K., Trouille, L., & Wilensky, U. (2016).
Defining computational thinking for mathematics and science classrooms. Journal of Science
Education and Technology, 25(1), 127–147.
13. Aho, A. V. (2012). Computation and computational thinking. The Computer Journal, 55(7),
832–835. https://doi.org/10.1093/comjnl/bxs074
14. Wing, J. (2011). Research notebook: Computational thinking—What and why? The Link
Magazine, Spring. Pittsburgh: Carnegie Mellon University.
15. Curzon, P., & McOwan, P. W. (2016). The power of computational thinking: Games, magic
and puzzles to help you become a computational thinker. New Jersey: World Scientific.
16. Boekaerts, M., Pintrich, P. R., & Zeidner, M. (Eds.). (2000). Handbook of self-regulation
(pp. 451–502). San Diego, CA: Academic Press.
17. Schunk, D. H., & Greene, J. A. (Eds.). (2018). Handbook of self-regulation of learning and
performance (2nd ed.). New York, NY: Routledge.
18. Zimmerman, B. J. (2000). Attaining self-regulation: A social-cognitive perspective. In
M. Boekaerts, P. Pintrich, & M. Zeidner (Eds.), Handbook of self-regulation (pp. 13–39). San
Diego, CA: Academic Press.
19. Cleary, T. J., Velardi, B., & Schnaidman, B. (2017). Effects of the self-regulation empower-
ment program (SREP) on middle school students’ strategic skills, self-efficacy, and mathemat-
ics achievement. Journal of School Psychology, 64, 28–42.
20. Peters, E. E., & Kitsantas, A. (2010). Self-regulation of student epistemic thinking in science:
The role of metacognitive prompts. Educational Psychology, 30(1), 27–52.
21. Kitsantas, A., & Zimmerman, B. J. (2009). College students’ homework and academic
achievement: The mediating role of self-regulatory beliefs. Metacognition and Learning, 4(2),
97–110. https://doi.org/10.1007/s11409-008-9028-y
22. Cleary, T. (Ed.). (2015). School psychology series. Self-regulated learning interventions
with at-risk youth: Enhancing adaptability, performance, and well-being. Washington, DC:
American Psychological Association. https://doi.org/10.1037/14641-000
23. Kitsantas, A., & Zimmerman, B. J. (2002). Comparing self-regulatory processes among nov-
ice, non-expert, and expert volleyball players: A microanalytic study. Journal of Applied Sport
Psychology, 14(2), 91–105. https://doi.org/10.1080/10413200252907761
24. Kolovelonis, A., Goudas, M., Dermitzaki, I., & Kitsantas, A. (2013). Self-regulated learning
and performance calibration among elementary physical education students. European Journal
of Psychology of Education, 28(3), 685–701. https://doi.org/10.1007/s10212-012-0135-4
25. Peters-Burton, E. E., & Botov, I. S. (2017). Self-regulated learning microanalysis as a tool to
inform professional development delivery in real-time. Metacognition and Learning, 12(1),
45–78. https://doi.org/10.1007/s11409-016-9160-z
26. Peters-Burton, E. E., Cleary, T. J., & Forman, S. G. (2015). Professional development contexts
that promote self-regulated learning and content learning in trainees.
https://doi.org/10.1037/14641-010
27. Denning, P. J. (2017). Remaining trouble spots with computational thinking. Communications
of the ACM, 60(6), 33–39. https://doi.org/10.1145/2998438
28. Cleary, T. J., & Zimmerman, B. J. (2004). Self-regulation empowerment program: A school-
based program to enhance self-regulated and self-motivated cycles of student learning.
Psychology in the Schools, 41(5), 537–550.
29. Thomas, J. W., & Rohwer, W. D., Jr. (1986). Academic studying: The role of learning strate-
gies. Educational Psychologist, 21, 19–41.
30. Rohrkemper, M. (1989). Self-regulated learning and academic achievement: A Vygotskian
view. In B. J. Zimmerman & D. H. Schunk (Eds.), Self-regulated learning and academic
achievement: Theory, research and practice (pp. 143–167). New York: Springer.
31. Wang, M. C., & Peverly, S. T. (1986). The self-instructive process in classroom learning con-
texts. Contemporary Educational Psychology, 11, 370–404.
32. Henderson, R. W. (1986). Self-regulated learning: Implications for the design of instructional
media. Contemporary Educational Psychology, 11, 405–427.
33. Pape, S. J., Bell, C. V., & Yetkin-Özdemir, I. E. (2013). Sequencing components of mathemat-
ics lessons to maximize development of self-regulation: Theory, practice, and intervention.
In H. Bembenutty, T. J. Cleary, & A. Kitsantas (Eds.), Applications of self-regulated learning
across diverse disciplines: A tribute to Barry J. Zimmerman (pp. 29–58). Charlotte, NC: IAP
Information Age Publishing.
34. Kitsantas, A., Zimmerman, B. J., & Cleary, T. (2000). The role of observation and emula-
tion in the development of athletic self-regulation. Journal of Educational Psychology, 92(4),
811–817.
35. Kitsantas, A., & Zimmerman, B. J. (1998). Self-regulation of motoric learning: A stra-
tegic cycle view. Journal of Applied Sport Psychology, 10(2), 220–239. https://doi.
org/10.1080/10413209808406390
240 E. E. Peters-Burton et al.
36. McPherson, G. E., & Zimmerman, B. J. (2011). Self-regulation of musical learning: A social
cognitive perspective on developing performance skills. In R. Colwell & P. Webster (Eds.),
MENC handbook of research on music learning, volume 2: Applications (pp. 130–175).
New York, NY: Oxford University Press.
37. Schunk, D. H., & Zimmerman, B. J. (Eds.). (1998). Self-regulated learning: From teaching to
self-reflective practice. New York, NY: Guilford Press.
38. Grover, S., Pea, R., & Cooper, S. (2015). Designing for deeper learning in a blended computer
science course for middle school students. Computer Science Education, 25(2), 199–237.
39. Grover, S., & Pea, R. (2013). Computational thinking in K-12: A review of the state of the
field. Educational Researcher, 42, 38–43.
40. Yadav, A., Zhou, N., Mayfield, C., Hambrusch, S., & Korb, J. T. (2011). Introducing compu-
tational thinking in education courses. In Proceedings of the 42nd ACM technical symposium
on computer science education (pp. 465–470). Dallas, TX: ACM.
41. Yadav, A., Mayfield, C., Zhou, N., Hambrusch, S., & Korb, J. T. (2014). Computational
thinking in elementary and secondary teacher education. ACM Transactions on Computing
Education (TOCE), 14(1), 5–21.
42. Cleary, T. J. (2011). Emergence of self-regulated learning microanalysis: Historical over-
view, essential features, and implications for research and practice. In B. J. Zimmerman &
D. Schunk (Eds.), Handbook of self-regulation of learning and performance (pp. 329–345).
New York: Routledge.
43. Barr, V., & Stephenson, C. (2011). Bringing computational thinking to K-12: What is involved
and what is the role of the computer science education community? ACM Inroads, 2(1), 48–54.
44. NGSS Lead States. (2013). Next generation science standards: For states, by states.
Washington, DC: Achieve.
45. Johnson, C. C., Walton, J., & Peters-Burton, E. E. (in press). STEM road map for elementary
school: Transportation in the future. Arlington, VA: NSTA Press.
46. Peters, E. E. (2012). Developing content knowledge in students through explicit teaching of
the nature of science: Influences of goal setting and self-monitoring. Science & Education,
21(6), 881–898. https://doi.org/10.1007/s11191-009-9219-1
47. White, M. C., & DiBenedetto, M. K. (2015). Self-regulation and the common core: Application
to ELA standards. New York, NY: Routledge.
48. Cooper, S., Forbes, J., Fox, A., Hambrusch, S., Ko, A., & Simon, B. (2016). The importance of
computing education research. arXiv preprint arXiv:1604.03446.
49. Franklin, D. (2015). Putting the computer science in computing education research.
Communications of the ACM, 58(2), 34–36.
50. Czerkawski, B. C., & Lyman, E. W. (2015). Exploring issues about computational thinking in
higher education. TechTrends, 59(2), 57–65.
51. Denning, P. J. (2009). The profession of IT beyond computational thinking. Communications
of the ACM, 52(6), 28–30.
Chapter 14
A Technology-Enhanced Pedagogical
Framework to Promote Collaborative
Creativity in Secondary Education
Manoli Pifarré and Laura Martí
Abstract This chapter proposes a technology-enhanced pedagogical framework
for collaborative creativity and explores its effects in secondary education. The
technology-enhanced pedagogical framework is built on sociocultural theory, which
conceptualizes creativity as a social activity based on intersubjectivity and dialogi-
cal interactions. Dialogue becomes an instrument for the development of collabora-
tive creativity processes such as divergent and convergent thinking, distributed
leadership, mutual engagement, and group reflection. Two real secondary classrooms
followed the technology-enhanced pedagogical framework to collaboratively solve
a social challenge and find a novel and valuable solution for a community. These
two case studies explored the role of technology in shaping collaborative creativity
processes and students’ perceptions of which specific collaborative and creative
processes emerged during the project. Findings showed that the technology-enhanced
pedagogical framework scaffolded the development of key divergent processes
such as idea generation and new ways of thinking. In addition, students reported
the emergence of convergent processes such as the selection and combination of
ideas, and they learned new ways of conveying and communicating ideas. Finally,
students highlighted the development of key learning-to-learn processes, related
mostly to group reflection and mutual engagement.
14.1 Introduction
We live in a changeable world, a world in motion. These changes can be seen at all
levels of our life and society. Many educational researchers highlight that it is under
these changeable circumstances that creativity has become an imperative in educa-
tion. These scholars claim that creativity helps us achieve our goals as individuals,
M. Pifarré (*) · L. Martí
Psychology Department, Universitat de Lleida, Lleida, Spain
e-mail: [email protected]; [email protected]
© Springer International Publishing AG 2018
D. Sampson et al. (eds.), Digital Technologies: Sustainable Innovations for
Improving Teaching and Learning, https://doi.org/10.1007/978-3-319-73417-0_14