POLITEKNIK SULTAN IDRIS SHAH
KEMENTERIAN PENGAJIAN TINGGI MALAYSIA
Copyright © 2021
All rights reserved. This book or any portion thereof may not be reproduced in any manner
whatsoever including electronic, mechanical, photocopying, recording, etc. without the
express written permission of the Author and Publisher of Politeknik Sultan Idris Shah.
Politeknik Sultan Idris Shah
Sg. Lang, 45100 Sg Air Tawar
Selangor
03 3280 6200
03 3280 6400
http://www.psis.edu.my
SYNOPSIS
This book gives a quick overview of resource management in operating systems, in a
format that students can quickly grasp. It introduces the subject concisely, describing
the complexities of operating systems without going into intricate detail.
In most operating systems, resource management means allocating system resources
such as Central Processing Units (CPUs), random-access memory, secondary storage
devices and external devices to processes, networks and applications.
To understand resource management in an OS, we need to understand process
management, CPU scheduling, and the relationship between the two. Process
management involves tasks such as the creation, scheduling and termination of
processes, and the handling of deadlock. CPU scheduling aims to reduce waiting time
and to support multiprogramming through an appropriate scheduling algorithm, so
that users can open many applications at one time, fulfilling the role of modern
computers.
To our Head of Department, Mr Hairulanuar Rosman,
To our Programme Coordinator, Mr Mohd Farhan ‘Uzair Paisan,
To our mentor, Mrs Zainora Kamal Ludin.
Thank you for your encouragement and support.
CONTENT
PART 01
INTRODUCTION
PART 02
PROCESS MANAGEMENT
PART 03
CPU SCHEDULING
PART 04
DEADLOCK
PART 05
TUTORIAL
TABLE OF CONTENT
PART 1 INTRODUCTION
• Operating system
• Operating System as a resource manager
Overview of Resource Management
PART 2 PROCESS MANAGEMENT
Process Concept
• The process
• Process Control Block
• Process state
Process Scheduling
• Process scheduling policies
• Scheduling queues
• CPU scheduling
• Types of process schedulers
PART 3 CPU SCHEDULING
Basic Concept
• CPU–I/O Burst Cycle
• CPU Scheduler
• Types of CPU Scheduler
• Dispatcher
CPU Scheduling Criteria
CPU Scheduling Algorithm
• FIFO
• RR
• Priority
• SJF
• SRT
• Multilevel queue
• Multilevel-feedback queue
PART 4 DEADLOCK
• Definition
• Necessary Condition
• Deadlock Solution
PART 5 EXERCISES
INTRODUCTION PART O1
Introduction
• Operating system
• Operating System as a resource
manager
Overview of Resource Management
O1
INTRODUCTION
1.1 INTRODUCTION
Before we go further into resource management topic, we need to understand the
function of operating system itself.
1.1.1 Operating System
In general, an operating system is a program that enables an electronic device or
hardware to communicate with humans or other users. It provides a user-friendly
interface that allows the user to get the most out of their electronic devices or
hardware. Android, Windows and iOS are three examples of operating systems.
An operating system is a piece of software that controls all of the hardware and
software on a computer. It also acts as a bridge between the computer user and the
computer hardware, serving as a platform and foundation for application
programs.
1.1.2 Operating System as a Resource Manager
An operating system manages all the system resources, such as memory,
input/output devices, the processor and files, so that the electronic device works
properly. Acting as the resource manager, it allocates these resources to specific
programs to complete their tasks as necessary.
1.2 OVERVIEW OF RESOURCE MANAGEMENT
In all operating systems, resource management is the process of allocating system
resources (such as random-access memory, the Central Processing Unit (CPU),
secondary storage devices, external devices, and so on) to specific processes, threads,
and applications. This is typically done to achieve high throughput, service quality,
fairness, and balance across all processes. Several scheduling algorithms are required
to share the system resources fairly among the processes so that each task can be
completed. This kind of scheduling is a fundamental requirement for systems that
perform multitasking and multiplexing.
To complete a task, efficient process management is required to manage and control
all processes executing on the CPU. A program in execution is referred to as a
process, and it must pass through several process states before it completes. CPU
scheduling determines the order in which processes are executed on the CPU, using
an appropriate algorithm. These algorithms are discussed in Part 03 of this book.
The relationship between process management and CPU scheduling is summarized
in Figure 1.1.
Figure 1.1 : Overview of resource management in operating system
PROCESS MANAGEMENT PART O2
Process Concept
• The process
• Process Control Block
• Process state
Process Scheduling
• Process scheduling policies
• Scheduling queues
• CPU scheduling
• Types of process schedulers
O2
PROCESS
MANAGEMENT
2.1 PROCESS CONCEPT
A process is a program in execution, and process execution must proceed in a
sequential manner. The operation of a process is controlled with the help of a Process
Control Block (PCB), and the process needs particular resources, such as CPU time,
memory, files and input/output devices, to complete its task. These resources are
typically allocated to the process while it is executing.
In an operating system, a process may be in one of five states, and an interrupt can
occur while the process changes state.
In modern operating systems, some process should be running at all times to
maximize CPU utilization. This is called multiprogramming, and making it happen is
the role of the process scheduler.
2.1.1 The Process
A process is an instance of a program in execution. It includes the current activity, as
represented by the contents of the processor registers and the value of the program
counter. A process generally also includes a process stack, which contains temporary
data (such as return addresses, local variables and function parameters), and a data
section, which contains global variables.
Figure 2.1. shows a process architecture which is divided into four sections and its
explanation.
Figure 2.1 : Process architecture
2.1.2 Process Control Block (PCB)
PCB stands for Process Control Block. It is a data structure used by the operating
system to keep all the information about every process. An integer Process ID (PID)
identifies the PCB. The PCB stores all the required information about the process so
that every running process can be traced. A PCB is illustrated in Figure 2.2.
• Process State – New, ready, running, waiting, terminated
• Process ID/number – identity of the PCB.
• CPU registers and Program Counter - These need to be saved and restored
when swapping processes in and out of the CPU.
• CPU-Scheduling information - Such as priority information and pointers to
scheduling queues.
• Memory-Management information - page tables or segment tables.
• Accounting information - user and kernel CPU time consumed, account
numbers, limits, etc.
• I/O Status information - Devices allocated, open file tables, etc.
Figure 2.2 : Process Control Block (PCB)
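As a sketch, the PCB fields listed above can be modelled as a simple data structure. The field names below are illustrative choices, not from the book; a real kernel keeps the PCB as a C struct in kernel memory.

```python
from dataclasses import dataclass, field

@dataclass
class PCB:
    pid: int                                         # Process ID/number
    state: str = "new"                               # new, ready, running, waiting, terminated
    program_counter: int = 0                         # saved/restored on a context switch
    registers: dict = field(default_factory=dict)    # CPU registers
    priority: int = 0                                # CPU-scheduling information
    page_table: dict = field(default_factory=dict)   # memory-management information
    cpu_time_used: int = 0                           # accounting information
    open_files: list = field(default_factory=list)   # I/O status information

pcb = PCB(pid=42)
print(pcb.state)    # "new" - a newly created process starts in the new state
```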
2.1.3 Process State
The state of a process changes as it executes, and can be defined as the current
activity of the process. Five states can occur, as follows:
• New - the process is being created
• Ready - the process is waiting to be assigned to a CPU
• Running - instructions are being executed
• Waiting - the process is waiting for some event to occur (such as I/O
completion or signal reception)
• Terminated - the process has finished execution
While a process is changing state, one important event can occur: an interrupt.
Interrupts are electronic signals sent to the CPU by external devices, usually
input/output devices. An interrupt signal tells the CPU to stop its current activity
and execute another requested part of the system. The process state transitions
are illustrated in Figure 2.3.
Figure 2.3 : Process state
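The legal transitions between the five states in Figure 2.3 can be sketched as a small lookup table; the change_state() helper is an illustrative assumption, not part of any real kernel API.

```python
# Legal state transitions from Figure 2.3.
VALID_TRANSITIONS = {
    "new": {"ready"},
    "ready": {"running"},
    "running": {"ready", "waiting", "terminated"},   # interrupt, I/O wait, or exit
    "waiting": {"ready"},                            # I/O completion or event occurs
    "terminated": set(),                             # no way out of terminated
}

def change_state(current, new):
    """Return the new state, refusing transitions the diagram does not allow."""
    if new not in VALID_TRANSITIONS[current]:
        raise ValueError(f"illegal transition {current} -> {new}")
    return new

print(change_state("running", "waiting"))   # an I/O request blocks the process
```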
2.2 PROCESS SCHEDULING
The objective of multiprogramming is to have some process running at all times to
maximize CPU utilization. To meet this objective, a process scheduler is needed to
select an available process for execution on a core. The process scheduler meets this
objective by implementing suitable policies for swapping processes in and out of the
CPU.
Only one process can run on a single CPU core at a time, while a multicore system
can run several processes at once. If there are more processes than cores, the excess
processes must wait until a core is free and they can be rescheduled. The number of
processes currently held in memory is known as the degree of multiprogramming.
2.2.1 Process Scheduling Policies
As we know, in a multiprogramming environment many jobs or processes can be
executed at one time. Before the operating system can schedule them, it needs to
resolve three limitations of the system:
1. there are a finite number of resources (such as printers, disk drives, and
tape drives);
2. some resources can't be shared with another job once they're assigned
(such as printers);
3. some resources require operator intervention because they can't be reassigned
automatically from job to job (such as tape drives).
Figure 2.4 : Process Scheduling Policies
2.2.2 Scheduling Queues
The operating system keeps all Process Control Blocks (PCBs) in process scheduling
queues. A separate queue for each process state is maintained by the OS. PCBs of
processes in the same execution state are placed in the same queue. When the state
of a process changes, its PCB is unlinked from its current queue and moved to its new
state queue.
The operating system maintains the following important process scheduling queues:
• Job queue − this queue keeps all the processes in the system.
• Ready queue − this queue keeps the set of all processes residing in main
memory, ready and waiting to execute. A new process is always put in this
queue.
• Device queues − processes which are blocked due to the unavailability of an
input/output device are kept in these queues.
Figure 2.5 : Scheduling Queue
As processes enter the system, they reside in the ready queue, where they wait to be
executed on a CPU core. This queue is generally stored as a linked list: a pointer in
the ready-queue header links to the first PCB in the list, and each PCB has a pointer
field pointing to the next PCB in the ready queue. The system also includes other
queues. While running on a CPU core, a process executes until it finishes its task, is
interrupted, or waits for an event such as the completion of an input/output request.
Since devices run much slower than processors, the process has to wait for the
input/output to become available. Processes that are waiting for an event to occur,
such as the completion of input/output, are placed in a wait queue.
Figure 2.6 : Ready queue and wait queue
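The movement of PCBs between the ready queue and the wait queue can be sketched with Python's double-ended queues; the PID values below are made up for illustration.

```python
from collections import deque

ready_queue = deque([101, 102, 103])   # PIDs of processes ready for a CPU core
wait_queue = deque()                   # PIDs of processes blocked on I/O

# Dispatch the process at the head of the ready queue.
running = ready_queue.popleft()        # PID 101 gets the core

# It issues an I/O request, so it is placed in the wait queue.
wait_queue.append(running)

# When the I/O completes, the process moves back to the ready queue.
ready_queue.append(wait_queue.popleft())
print(list(ready_queue))               # [102, 103, 101]
```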
Figure 2.7 illustrates the queueing diagram of process scheduling. There are two types
of queues: ready and wait. The circles represent the resources that serve the queues,
and the arrows show process flow. A new process is initially placed in the ready
queue, where it sits until it is chosen for execution. Once the process has been
assigned to a CPU core, one of the following things can happen:
• The process could issue an input/output request and be placed in an I/O wait
queue.
• The process could create a new child process and then wait in a queue for the
child to finish.
• An interrupt may force the process out of the core, or its time slice could
expire, returning it to the ready queue.
In the first two circumstances, the process eventually moves from the waiting state
back to the ready queue. When a process terminates, it is removed from all queues,
and its PCB and resources are deallocated.
Figure 2.7 : Queueing-diagram representation of process scheduling.
2.2.3 CPU Scheduling
The role of the CPU scheduler is to select a process from the ready queue and assign
it to a CPU core. The CPU scheduler must select a new process for the CPU
frequently. An I/O-bound process may execute for only a few milliseconds before
waiting for an I/O request. Although a CPU-bound process will require a CPU core
for longer periods, the scheduler is unlikely to grant the core to one process for an
extended period: it will forcibly remove the CPU from the process and reschedule
another process to run. Therefore, the CPU scheduler executes at least once every 100
milliseconds.
Some operating systems have a swapping mechanism, where a process can be "swapped
out" from memory to disk, with its current status saved. Later the process is
restored by being "swapped in" from disk back to memory, and its execution
continues where it left off. Removing a process from memory temporarily reduces
the degree of multiprogramming, which can be advantageous when memory is
overcommitted.
2.2.4 Types of Process Schedulers
Process scheduling is handled by special software called a scheduler. There are three
types of process schedulers: long-term, short-term and medium-term schedulers.
Table 1 shows the differences between the three types of schedulers.
Table 1 : Differences between types of process schedulers
CPU SCHEDULING PART O3
Basic Concept
• CPU–Input/Output Burst Cycle
• CPU Scheduler
• Types of CPU Scheduler
• Dispatcher
CPU Scheduling Criteria
CPU Scheduling Algorithm
• FIFO
• RR
• Priority
• SJF
• SRT
• Multilevel queue
• Multilevel-feedback queue
O3
CPU
SCHEDULING
3.1 BASIC CONCEPT
On a single-core system, just one process can run at a time; the others must wait
until the CPU core is freed. Multiprogrammed operating systems rely on CPU
scheduling: switching the CPU between processes makes the computer more
productive. This chapter introduces basic CPU-scheduling concepts and the operation
of CPU-scheduling algorithms.
Multiprogramming keeps multiple processes in memory at once. When a process has
to wait, the operating system switches the CPU to another process. This principle of
keeping the CPU active and busy extends to multicore systems. Scheduling of this
kind is fundamental to modern operating systems: almost all computer resources are
scheduled before use. The CPU is a major computer resource, and its scheduling is
central to operating-system design.
3.1.1 CPU–Input/Output Burst Cycle
The success of CPU scheduling depends on an observed property of processes: process
execution alternates between CPU execution and input/output wait. Process execution
starts with a CPU burst, followed by an I/O burst, then another CPU burst, and so on.
The final CPU burst ends with a system request to terminate execution.
Figure 3.1 : CPU–Input/output Burst Cycle
3.1.2 CPU Scheduler
Whenever the CPU becomes idle, the CPU scheduler selects a process from the ready
queue in memory and allocates the CPU to it, using a suitable CPU-scheduling
algorithm.
3.1.3 Types of CPU scheduling
CPU-scheduling decisions may take place under the following four circumstances:
1. When a process switches from the running state to the waiting state (for
example, as the result of an input/output request, or while waiting for the
termination of one of its child processes).
2. When a process switches from the running state to the ready state (for example,
when an interrupt occurs).
3. When a process switches from the waiting state to the ready state (for example,
at completion of input/output).
4. When a process terminates/completes.
For situations 1 and 4 there is no choice in terms of scheduling: a new process must
be selected for execution. However, there is a choice in situations 2 and 3.
When scheduling takes place only under situations 1 and 4, the scheduling scheme is
known as non-pre-emptive; otherwise, the scheduling scheme is pre-emptive. These
types of CPU scheduling are illustrated in Figure 2.3 (process state).
3.1.4 Differences between pre-emptive and non-pre-emptive
Table 2 shows the differences between pre-emptive and non pre-emptive scheduling
Table 2 : Pre-emptive vs non-pre-emptive scheduling
3.1.5 Dispatcher
The Dispatcher is another component involved in the CPU scheduling function. A
dispatcher is a module that gives CPU control to a process that has been chosen by a
short-term scheduler. This function entails the following steps:
• Switching context
• Switching to user mode
• Navigating to the correct location in the user programme to restart it from
where it left off the last time.
Because it is used during each process switch, the dispatcher should be as fast as
possible. Dispatch Latency is the time it takes a dispatcher to stop one process and
start another. Dispatch Latency can be explained using the diagram below:
Figure 3.2 : Dispatch latency
The scheduler and the dispatcher are both involved in an operating system's process
scheduling. The main distinction between them is that the scheduler chooses one of
several processes to execute, whereas the dispatcher allocates CPU resources to the
process chosen by the scheduler.
3.2 CPU SCHEDULING CRITERIA
The five criteria involved in scheduling algorithms are as follows:
Figure 3.3 : CPU scheduling criteria
3.3 CPU SCHEDULING ALGORITHM
To decide which process to execute first and which to execute last so as to achieve
maximum CPU utilization, computer scientists have defined several algorithms:
Figure 3.4 : CPU scheduling algorithm
3.3.1 First In First Out Scheduling (FIFO)
It is the simplest algorithm. In this approach, the process that asks for the CPU first
receives it first. This scheduling method uses a FIFO queue.
Example
Consider the following set of processes.
- Show the result in a Gantt chart
- Calculate the average waiting time
The processes are assumed to arrive in the order shown, all at time 0.
Process Burst Time
(ms)
P1 24
P2 3
P3 3
Answer
P1 P2 P3
0 24 27 30
Waiting time of each process = completion time – burst time
P1 : 24 - 24 = 0
P2 : 27 - 3 = 24
P3 : 30 – 3 = 27
The waiting time is 0 milliseconds for process P1, 24 milliseconds for P2, and 27
milliseconds for P3.
Average waiting time (AWT): (0+24+27)/3 = 17 ms.
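The FIFO calculation above can be sketched in a few lines of Python; the fifo() helper is an illustrative name, and all processes are assumed to arrive at time 0, as in the example.

```python
def fifo(processes):
    """processes: list of (name, burst_time) pairs, all arriving at time 0.
    Returns the waiting time of each process."""
    time, waits = 0, {}
    for name, burst in processes:
        waits[name] = time      # waited from time 0 until its turn on the CPU
        time += burst           # runs to completion with no pre-emption
    return waits

waits = fifo([("P1", 24), ("P2", 3), ("P3", 3)])
print(waits)                                  # {'P1': 0, 'P2': 24, 'P3': 27}
print(sum(waits.values()) / len(waits))       # 17.0
```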
3.3.2 Round Robin Scheduling (RR)
Time-sharing systems generally use the Round Robin (RR) scheduling method. RR
scheduling is similar to FIFO (FCFS) scheduling but adds pre-emption, allowing the
system to switch between processes. Each process in RR scheduling is given a
quantum of time; the length of a time quantum is generally from 10 to 100
milliseconds. Once a process has run for the specified time, it is pre-empted by
another process. This technique is simple and easy to implement, and it is
starvation-free, so all processes get a fair portion of the CPU.
Example
Consider the following set of processes.
- Show the result in a Gantt chart
- Calculate the average waiting time, given a quantum time of 4 ms
The processes are assumed to arrive in the order shown, all at time 0.
Process Burst Time
(ms)
P1 24
P2 3
P3 3
Answer
P1 gets the first 4 ms. Since it still needs another 20 ms, P1 is pre-empted after the
first quantum and the CPU is given to the next process (P2). P2 finishes before its
time quantum expires because it only needs 3 ms. The CPU is then given to the next
process in the queue (P3), after which P1 runs in successive quanta until completion.
P1 P2 P3 P1 P1 P1 P1 P1
0 4 7 10 14 18 22 26 30
P1 : 30 - 24 = 6
P2 : 7 - 3 = 4
P3 : 10 - 3 = 7
Average waiting time (AWT) = (6+4+7)/3 ≈ 5.67 ms
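The round-robin trace above can be reproduced with a short simulation; round_robin() is an illustrative helper, and arrival time 0 is assumed for every process.

```python
from collections import deque

def round_robin(processes, quantum):
    """processes: list of (name, burst) pairs, all arriving at time 0.
    Returns the waiting time of each process."""
    remaining = {name: burst for name, burst in processes}
    queue = deque(name for name, _ in processes)
    time, finish = 0, {}
    while queue:
        name = queue.popleft()
        run = min(quantum, remaining[name])
        time += run
        remaining[name] -= run
        if remaining[name] > 0:
            queue.append(name)       # quantum expired: back of the ready queue
        else:
            finish[name] = time      # process completed
    bursts = dict(processes)
    return {name: finish[name] - bursts[name] for name in finish}

waits = round_robin([("P1", 24), ("P2", 3), ("P3", 3)], quantum=4)
print(waits)                         # {'P2': 4, 'P3': 7, 'P1': 6}
```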
3.3.3 Priority Scheduling
The process with the highest priority gets the CPU first; equal-priority processes are
scheduled FIFO. Note that the value 0 can represent either the highest or the lowest
priority: some systems use low numbers for low priority, and others use low numbers
for high priority. In the example below, a lower number means a higher priority.
Example
Consider the following set of processes.
- Show the result in a Gantt chart
- Calculate the average waiting time
The processes are assumed to arrive in the order shown, all at time 0.
Process Burst Time Priority
(ms)
P1 10 3
P2 1 1
P3 2 4
P4 1 5
P5 5 2
Answer
P2 P5 P1 P3 P4
0 1 6 16 18 19
P1 : 16 – 10 = 6
P2 : 1 – 1 = 0
P3 : 18 – 2 =16
P4 : 19 – 1 = 18
P5 : 6 – 5 = 1
Average waiting time (AWT) = (6+0+16+18+1)/5 = 8.2 ms
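Non-pre-emptive priority scheduling, with lower numbers meaning higher priority as in this example, can be sketched as a sort followed by a FIFO pass; priority_schedule() is an illustrative name.

```python
def priority_schedule(processes):
    """processes: list of (name, burst, priority) tuples, all arriving at
    time 0; a lower priority number means a higher priority."""
    time, waits = 0, {}
    for name, burst, _priority in sorted(processes, key=lambda p: p[2]):
        waits[name] = time       # waits until all higher-priority work is done
        time += burst
    return waits

waits = priority_schedule([("P1", 10, 3), ("P2", 1, 1), ("P3", 2, 4),
                           ("P4", 1, 5), ("P5", 5, 2)])
print(sum(waits.values()) / len(waits))   # 8.2
```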
3.3.4 Shortest Job First Scheduling (SJF)
Shortest Job First scheduling runs the process with the shortest burst time or
duration first. It comes in two forms:
1. Non-pre-emptive - once a process gets the CPU it keeps it until it releases the
CPU; among the processes that have already arrived, the one with the
shortest burst time is chosen.
2. Pre-emptive - the decision is re-evaluated as new processes arrive, and the
process with the shortest remaining burst time runs.
Example
Consider the following set of processes for Shortest Job First (non-pre-emptive).
- Show the result in a Gantt chart
- Calculate the average waiting time.
Process Arrival time Burst Time
(ms) (ms)
P1 0 8
P2 1 4
P3 2 9
P4 3 5
Answer
P1 P2 P4 P3
0 8 12 17 26
Turnaround time = Completion time – arrival time
Waiting time = Turnaround time – burst time
Process Arrival time (ms) Burst time (ms) Completion time (ms) Turnaround time (ms) Waiting time (ms)
P1 0 8 8 8 0
P2 1 4 12 11 7
P3 2 9 26 24 15
P4 3 5 17 14 9
Average Waiting Time : (0+7+15+9)/4 = 7.75 ms
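Non-pre-emptive SJF with arrival times, as in the worked example, can be sketched as follows; sjf() is an illustrative helper.

```python
def sjf(processes):
    """processes: list of (name, arrival, burst). Non-pre-emptive SJF:
    at each decision point, run the arrived process with the shortest burst."""
    pending = list(processes)
    time, stats = 0, {}
    while pending:
        ready = [p for p in pending if p[1] <= time]
        if not ready:                          # CPU idle until the next arrival
            time = min(p[1] for p in pending)
            continue
        job = min(ready, key=lambda p: p[2])   # shortest burst among arrivals
        pending.remove(job)
        name, arrival, burst = job
        time += burst                          # runs to completion
        stats[name] = {"completion": time,
                       "turnaround": time - arrival,
                       "waiting": time - arrival - burst}
    return stats

stats = sjf([("P1", 0, 8), ("P2", 1, 4), ("P3", 2, 9), ("P4", 3, 5)])
print([stats[p]["waiting"] for p in ("P1", "P2", "P3", "P4")])   # [0, 7, 15, 9]
```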
3.3.5 Shortest Remaining Time Scheduling (SRT)
This is a pre-emptive algorithm: whenever a new process arrives, the scheduler
compares remaining burst times and runs the process with the shortest remaining
time. It is also known as Shortest Job First Pre-emptive.
Example
Consider the following set of processes for Shortest Remaining Time (Shortest Job
First pre-emptive).
- Show the result in a Gantt chart
- Calculate the average waiting time.
Process Arrival time Burst Time
(ms) (ms)
P1 0 7
P2 2 4
P3 4 1
P4 5 4
Answer
P1 P2 P3 P2 P4 P1
0 2 4 5 7 11 16
Turnaround time = Completion time – arrival time
Waiting time = Turnaround time – burst time
Process Arrival time (ms) Burst time (ms) Completion time (ms) Turnaround time (ms) Waiting time (ms)
P1 0 7 16 16 9
P2 2 4 7 5 1
P3 4 1 5 1 0
P4 5 4 11 6 2
Average Waiting Time: (9+1+0+2)/4 = 3 ms
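The SRT trace can be reproduced by re-evaluating the shortest remaining time at every millisecond; srt() is an illustrative helper, and a 1 ms scheduling tick is assumed for simplicity.

```python
def srt(processes):
    """processes: list of (name, arrival, burst). Pre-emptive SJF (SRT):
    each millisecond, run the arrived process with the least time remaining.
    Returns the waiting time of each process."""
    remaining = {name: b for name, _, b in processes}
    arrival = {name: a for name, a, _ in processes}
    burst = {name: b for name, _, b in processes}
    time, finish = 0, {}
    while remaining:
        ready = [n for n in remaining if arrival[n] <= time]
        if not ready:                 # CPU idle: nothing has arrived yet
            time += 1
            continue
        name = min(ready, key=lambda n: remaining[n])
        remaining[name] -= 1          # run the chosen process for 1 ms
        time += 1
        if remaining[name] == 0:
            del remaining[name]
            finish[name] = time
    # waiting time = turnaround time - burst time
    return {n: finish[n] - arrival[n] - burst[n] for n in finish}

print(srt([("P1", 0, 7), ("P2", 2, 4), ("P3", 4, 1), ("P4", 5, 4)]))
# {'P3': 0, 'P2': 1, 'P4': 2, 'P1': 9}
```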
3.3.6 Multilevel Queue Scheduling Algorithm / Multilevel Feedback
Queue Scheduling Algorithm
The multilevel queue scheduling algorithm divides the ready queue into multiple
queues. Processes are permanently assigned to a single queue, usually based on some
process attribute such as memory size, process priority, or process type, and they do
not move between queues. Each queue has its own scheduling algorithm. This
arrangement has the advantage of low scheduling overhead, but it is not flexible.
Multilevel-feedback queue scheduling, by contrast, allows a process to move between
queues in order to separate processes with different CPU-burst characteristics. When
a process consumes too much CPU time, it is moved to a lower-priority queue, while
a process that waits too long in a lower-priority queue may be moved to a
higher-priority queue. This form of ageing prevents starvation.
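Demotion in a multilevel feedback queue can be sketched with a handful of lines; the three levels and their quanta below are illustrative assumptions, not values from the book.

```python
from collections import deque

queues = [deque(), deque(), deque()]   # level 0 = highest priority
quanta = [4, 8, 16]                    # lower levels get longer time slices

def run_head(level):
    """Run the process at the head of `level` for one quantum; a process
    that uses its whole quantum is demoted one level."""
    name, remaining = queues[level].popleft()
    used = min(quanta[level], remaining)
    remaining -= used
    if remaining > 0:                  # consumed the full quantum: demote
        lower = min(level + 1, len(queues) - 1)
        queues[lower].append((name, remaining))

queues[0].append(("P1", 20))           # a CPU-hungry process enters level 0
run_head(0)
print(queues[1])                       # deque([('P1', 16)]) - demoted
```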
DEADLOCK PART O4
• Definition
• Necessary Condition
• Deadlock Solution
O4
DEADLOCK
4.1 DEADLOCK
4.1.1 Deadlock Definition
A process in an operating system requests, uses, and releases resources. A deadlock
occurs when two or more processes are stuck, each waiting for a resource held by
another process. In Figure 4.1, multiple cars are locked together and cannot move
because each is blocked by the others. Similar situations arise in operating systems
when two or more processes share resources and wait for one another to release them.
Figure 4.1 : Example of deadlock situation
Figure 4.2 : Deadlock explanation
An explanation of deadlock is illustrated by Figure 4.2: process 1 is holding
resource 1 and waiting for resource 2, which is held by process 2, while process 2 is
waiting for resource 1.
Explanation of deadlock: https://youtu.be/onkWXaXAgbY
4.1.2 Necessary Condition
Deadlock can occur only if all four of the following conditions hold at the same time:
1. Mutual Exclusion: at least one resource is non-shareable (only one
process can use it at a time).
2. Hold and Wait: a process holds at least one resource while waiting for
additional resources held by other processes.
3. No Pre-emption: a resource cannot be taken away from a process; it is
released only voluntarily by the process holding it.
4. Circular Wait: a set of processes exists in which each process is waiting for
a resource held by the next, in a circular fashion.
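The circular-wait condition can be checked by looking for a cycle in a wait-for graph, where an edge P → Q means process P is waiting for a resource held by Q. This is a sketch of such a detector, not part of any real operating system; the graph below reproduces the two-process situation of Figure 4.2.

```python
def has_cycle(graph):
    """Depth-first search for a cycle in a directed wait-for graph."""
    visited, on_stack = set(), set()

    def dfs(node):
        visited.add(node)
        on_stack.add(node)             # nodes on the current DFS path
        for nxt in graph.get(node, ()):
            if nxt in on_stack or (nxt not in visited and dfs(nxt)):
                return True
        on_stack.discard(node)
        return False

    return any(dfs(n) for n in graph if n not in visited)

# Process 1 waits for process 2, which waits for process 1: a deadlock.
print(has_cycle({"P1": ["P2"], "P2": ["P1"]}))   # True
```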
4.1.3 Deadlock Solution
Deadlock can be handled by ensuring that at least one of the four necessary
conditions cannot hold:
a. Eliminate mutual exclusion - use non-blocking synchronisation algorithms or
spooling so that no process requires exclusive access to a resource. This is
not possible for resources that are inherently non-shareable.
b. Eliminate hold and wait - require a process to request all the resources it
will need before it begins, so that it never holds some resources while
waiting for others.
c. Allow pre-emption - permit the operating system to take a resource away
from a process and return it later.
d. Eliminate circular wait - impose a total ordering on all resources and
require every process to request resources in that order.
TUTORIAL PART O5
QUESTION & ANSWER
O5
TUTORIAL