Besides these, the service team profiling also showed that early and continuous
engagement of the teams is important in the assessment of services. Early
engagement has been conducted on an ad hoc basis as well as frequently and
systematically. Early engagement can help the teams pass assessment because it
involves:
dissemination of information about the digital academy, resource lists and reading
material to the teams;
a review of the discovery phase;
seeing the demo/prototype during a face-to-face (F2F) visit;
introducing the teams to more experienced teams;
organising a person to observe an assessment;
completion of a content review; and
sharing of a pre-assessment checklist.
The speaker concluded his speech by reviewing Figure 8, which summarises the overall
service assessment process.
Figure 8: Overall service assessment process
Question & Answer
Question : How can silos between agencies be handled?
Answer : A team should be built consisting of officers from various agencies. The team
should identify the objectives of implementing assurance in their services. With
these objectives, the team should stay connected to build new initiatives and
maintain continuous communication.
3. SPEAKER SESSION PLENARY 3: “COGNITIVE TODAY, DIGITAL
TOMORROW: COGNITIVE ERGONOMICS APPLIED TO DIGITAL
GOVERNMENT” BY MS. HALIMAHTUN MOHD KHALID (PHD, CHFP, KMN),
DAMAI SCIENCES KUALA LUMPUR
The speaker began her presentation by stating that the service concepts currently
applied, especially in web-based government services and digital services, have to be
re-examined in order to be digitally ready. She disagreed with an article published in
the MIT Sloan Management Review titled "Digital Today, Cognitive Tomorrow," arguing
that people have not reached that level yet and that it should instead be "Cognitive
Today, Digital Tomorrow." To be digital today, everything would have to be in digital
form, while cognitive systems, which we will rely on in the future, would be used to
translate and interpret the feedback and responses received from the people. She
questioned whether we are even at that level yet with the current systems in place.
As we are not at that level yet, she highlighted that we should take a step back,
understand the cognitive level, and address the needs of the users who will be using
these services. According to the speaker, most of the concepts being implemented
today came from the Handbook of Human-Computer Interaction, written by Martin G.
Helander in 1988. Singapore started early by implementing e-Citizen, while Malaysia
is not there yet. The speaker described the possibility of interaction with virtual
humans, and the designs needed for people to accept these virtual assistants.
Examples of virtual assistants include Emma, from the Department of Homeland
Security USA, and Siri on the iPhone.
The speaker highlighted that the acceptance of these virtual assistants in a system
depends highly on understanding the culture, the people and their background. In a
project undertaken by the speaker under a US Air Force grant, known as THRUST
(Trusting Humanoid Robots Undertake Social Tasks), the virtual assistants, which try
to mimic a human, were equipped with natural dialogue and dressed like the people
they communicate with, together with facial expressions and hand gestures. Their
success or failure depends highly on their interactivity with people when providing
services, which is known as cognitive technology.
There are two types of robot partnership:
a. Mixed human-robot partnership
b. Multi human-avatar partnership
From here, the speaker asserted that what the future needs is learning workers
instead of knowledge workers. Learning workers are people who can interpret, learn
and use big data, which makes data scientists a more important group than
programmers.
Referring to the Seven Pillars of Digital Government, which ensure a successful
transformation, the speaker disagreed that user-centered design should be the last
pillar. In fact, she emphasised that it should be the first pillar in the blueprint, so as to
better understand the users. She stressed that, as the fundamentals of building good
applications, these interactive technologies should have:
a. Cognitive Ergonomics
b. Human Factors Engineering
c. Affective Engineering
The speaker defined cognitive ergonomics as a person's ability to interact with the
environment, using reasoning and perception to perceive and filter things in their
memory. She also stressed that a person's attentional resources are very limited. The
speaker then explained in detail the cognitive workings that determine how a person
perceives and accepts the services presented on a website. From there, the speaker
focused on the three types of processors that a human has, namely the perceptual
processor, the cognitive processor and the motor processor, which play a role in
determining a person's acceptance of the web services offered on a website. She also
highlighted the positive and negative points raised by website users, citing examples
from the Malaysia Digital Economy Corporation (MDEC) and the Federal Emergency
Management Agency (FEMA). Some of the mental model issues raised for website
design are:
a. What mental model might you expect the user to have?
b. What mental model should they have?
c. How should you convey an appropriate mental model to the user?
d. How should you design the interface to reflect the mental model?
e. How do you represent the mental model?
To overcome these mental model issues, the speaker recommended that the web
services offered on a website be measured against Norman's Action Cycle to
determine how wide the Gulf of Execution and the Gulf of Evaluation are. The wider
the gulfs, the lower the achievability of the goals. In fact, the efficiency with which
users can achieve their goals, and the satisfaction they feel when using the portal,
should be the main objectives of a website.
Besides that, the usability attributes of a website are best captured by the LEEERS
model, whose attributes are as follows (an illustrative scoring sketch appears after the
list):
a. Learnability – Time and effort to reach proficiency
b. Effectiveness – Accomplishing goal-directed tasks to criterion performance
c. Efficiency – Level of performance relative to resources
d. Errors – Errors committed, and how to design for recovery
e. Retention – Remembering how to use the system for better performance next time
f. Satisfaction with the experience – User attitude during and after use
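To make the LEEERS attributes easier to apply, the following is a minimal scoring
sketch, assuming a review team rates each attribute on the same 1-to-5 Likert scale
used elsewhere in this report. The attribute names follow the list above, while the
function name, the sample scores and the equal weighting are illustrative assumptions
rather than anything prescribed by the speaker.

from statistics import mean

# The six LEEERS usability attributes, as listed above.
LEEERS_ATTRIBUTES = [
    "learnability",    # time and effort to reach proficiency
    "effectiveness",   # accomplishing goal-directed tasks to criterion performance
    "efficiency",      # level of performance relative to resources
    "errors",          # errors committed, and how well the design supports recovery
    "retention",       # remembering the system for better performance next time
    "satisfaction",    # user attitude during and after use
]

def overall_usability(ratings: dict) -> float:
    """Average the six LEEERS ratings (1 = very poor, 5 = excellent)."""
    missing = [a for a in LEEERS_ATTRIBUTES if a not in ratings]
    if missing:
        raise ValueError("missing ratings for: " + ", ".join(missing))
    return mean(ratings[a] for a in LEEERS_ATTRIBUTES)

# Hypothetical scores from a usability walkthrough of a government portal.
portal_ratings = {
    "learnability": 4, "effectiveness": 4, "efficiency": 3,
    "errors": 3, "retention": 4, "satisfaction": 4,
}
print("Overall usability score:", round(overall_usability(portal_ratings), 2))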
Furthermore, the user's experience goals for a website are that it be satisfying,
entertaining, aesthetically pleasing, fun, helpful, rewarding, enjoyable, motivating and
supportive of creativity. Meanwhile, from the services aspect, a website can be
enhanced using the Core Affect Circle (Russell, 2003), where the website should be
directed towards the positive affect of the circle, such as a pleasant foreground and
people who are excited by the information, rather than the negative affect, which is
dull and monotonous. The Kano model of user satisfaction can be used to gauge
users' satisfaction with the web services, for example through increased
personalization, visible search tools, good error feedback, and a reduction in
processing delay, language complexity and image complexity.
In conclusion, every citizen interface, application and interactivity should have these
features:
a. Easy to understand and learn
b. Error tolerant
c. Flexible and adaptable
d. Appropriate and effective for the task
e. Powerful and efficient
f. Inexpensive
g. Portable
h. Compatible
i. Intelligent
j. Support social and group interactions
k. Trustworthy (secure, private, safe and reliable)
l. Information centered
m. Pleasant to use
C. Roundtable Design Sprint
The Roundtable Design Sprint, led by Mr. Marcus Foth was held in the afternoon
session where the delegates were divided into respective groups to discuss thirteen
(13) themes of service delivery sectors. MDEC and INTAN facilitated the session and
the themes discussed were as follows:
1. Urban Policy & Regulatory Reform
2. Diversity & Innovation
3. Business & Entrepreneurship
4. Sustainability & Climate Change
5. Transport & Mobility
6. Housing Affordability
7. Safety & Security
8. Open Data
9. Social Inclusion & Welfare
10. Culture & Tourism
11. Health & Wellbeing
12. Finance & Taxes
13. Education & Training
Each group was given a template of the USE-CASE EXPLORATION CANVAS, as
shown in Figure 9. It covers two main sections, the Problem Space and the Solution
Space, each with its respective points for discussion (an illustrative data-structure
sketch follows the breakdown below). Each group then gave a five-minute
presentation of its output.
The output from this Roundtable Design Sprint will be explored and refined by the
MDEC team, led by the head of facilitators, Dr. Idyawati binti Hussein.
Refer ATTACHMENT 4 for some outputs of the Use-Case Exploration Canvas.
Figure 9: Use-Case Exploration Canvas (source: QUT Design Lab)
A. The Problem Space
i) Persona – The identified person/profile
ii) Goal – What is your persona trying to achieve?
iii) Pains – What are the problems and challenges they encounter?
iv) Gains – What would make it easier for them?
B. The Solution Space
v) Product/Service – What kind of product or service can we create that would
facilitate the gains?
vi) Resources – What kind of information, data, infrastructure, technology, etc.
could Government provide to allow us to deliver this product or service (in
a better way)?
vii) Role – What would be our role as a government team or as individual citizens
in delivering this product or service together with others?
viii) Stakeholders – Who are the stakeholders of our project? Who else should
be involved and for which purpose?
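For illustration only, the sketch below captures the canvas fields from Figure 9 as a
simple data structure, assuming Python dataclasses were used so that the groups'
outputs could be collected and compared. The class and field names, and the example
content for the Transport & Mobility theme, are hypothetical and not taken from any
group's actual output.

from dataclasses import dataclass

@dataclass
class ProblemSpace:
    persona: str        # i) the identified person/profile
    goal: str           # ii) what the persona is trying to achieve
    pains: list[str]    # iii) problems and challenges they encounter
    gains: list[str]    # iv) what would make it easier for them

@dataclass
class SolutionSpace:
    product_or_service: str   # v) product or service that would facilitate the gains
    resources: list[str]      # vi) information, data, infrastructure, technology, etc.
    role: str                 # vii) role of the government team or individual citizens
    stakeholders: list[str]   # viii) stakeholders and who else should be involved

@dataclass
class UseCaseCanvas:
    theme: str                # one of the thirteen themes, e.g. "Transport & Mobility"
    problem: ProblemSpace
    solution: SolutionSpace

# Hypothetical example for one theme; the content is illustrative only.
example = UseCaseCanvas(
    theme="Transport & Mobility",
    problem=ProblemSpace(
        persona="Daily commuter",
        goal="Plan a reliable door-to-door journey",
        pains=["Unpredictable bus arrival times"],
        gains=["Real-time arrival information"],
    ),
    solution=SolutionSpace(
        product_or_service="Journey-planning service using live transit data",
        resources=["Open real-time transit data feeds"],
        role="Publish and maintain the open data feeds",
        stakeholders=["Transport agency", "Commuters", "App developers"],
    ),
)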
IV. Program Feedback
Feedback was received from 88 delegates, most of whom were from the managerial
level. On average, they gave the program a "good" rating, with scores ranging from
4.2 to 4.5 out of 5 on a Likert scale, covering the program's effectiveness, content,
learning method, usefulness, increase in knowledge and skills, as well as the
secretariat's role in organising the event, as shown in Chart 1.
Chart 1: Overall Program Rating (Likert scale: 5 Excellent, 4 Good, 3 Regular, 2 Poor,
1 Very Poor; scores of 4.2 to 4.5 across Effectiveness, Content, Methodology,
Usefulness, Knowledge and Skills, and Secretariat)
It is noted that a "good" rating, with scores ranging from 4.1 to 4.3, was also given to
the speakers for their clear delivery of knowledge and skills, their creation of an
engaging learning environment, and their interaction capabilities, as shown in Chart 2.
Chart 2: Overall Speakers' Rating (Likert scale: 5 Excellent, 4 Good, 3 Regular, 2 Poor,
1 Very Poor; scores of 4.1 to 4.3 across Deliverables, Engaging and Interaction)
On the speakers' delivery style, content and topic connection to the program's
theme, the delegates' average rating was 4.3 for Speaker 1, Prof. Foth; 4.0 for
Speaker 2, Mr. Bond; and 4.5 for Speaker 3, Dr. Halimahtun. Meanwhile, the
Roundtable Design Sprint led by Prof. Foth received an average score of 3.6. More
information on the speakers' session ratings is shown in Chart 3.
Chart 3: Speaker's Session Rating (Likert scale: 5 Excellent, 4 Good, 3 Regular,
2 Poor, 1 Very Poor; each session rated on Delivery Style, Content and Connection;
averages: Prof. Foth 4.3, Mr. Bond 4.0, Dr. Halimahtun 4.5, Roundtable Design
Sprint 3.6)
Refer ATTACHMENT 5 for Program’s Evaluation Form.
V. Conclusion
The Digital Leadership Experience (DLE) 2017 received positive feedback from the
stakeholders. DLE2017 represented the diligent efforts of INTAN, MDEC and MAMPU
to raise awareness in the public sector of the need to enhance efficiency and
productivity through the adoption of innovative technology, towards more
citizen-centric government service delivery and a better quality of life for citizens,
in accordance with the 11th Malaysia Plan.
LIST OF ATTACHMENTS
ATTACHMENT 1: PROGRAM BROCHURE
ATTACHMENT 2: WELCOMING REMARKS SENIOR DEPUTY DIRECTOR INTAN
ATTACHMENT 3: IAP2’s PUBLIC PARTICIPATION SPECTRUM
ATTACHMENT 4: USE-CASE EXPLORATION CANVAS DISCUSSION OUTPUTS
ATTACHMENT 5: PROGRAM EVALUATION FORM
ATTACHMENT 6: PHOTO GALLERY
Report Prepared by:
Rapporteur Working Committee
Digital Leadership Exchange (DLE2017)
Cluster for Innovative Management Technology (i-IMATEC)
National Institute of Public Administration (INTAN)
October 2017
____________________________________________________________________