POLITEKNIK SULTAN IDRIS SHAH
KEMENTERIAN PENGAJIAN TINGGI MALAYSIA
Copyright © 2021
All rights reserved. This book or any portion thereof may not be reproduced in any manner
whatsoever including electronic, mechanical, photocopying, recording, etc. without the
express written permission of the Author and Publisher of Politeknik Sultan Idris Shah.
Politeknik Sultan Idris Shah
Sg. Lang, 45100 Sg Air Tawar
Selangor
03 3280 6200
03 3280 6400
http://www.psis.edu.my
SYNOPSIS
This book gives a quick overview of resource management in operating systems, in a
format that students can quickly grasp. It introduces the subject concisely,
describing the complexities of operating systems without going into intricate detail.
In most operating systems, the allocation of system resources such as Central
Processing Units (CPUs), random access memory, secondary storage devices and
external devices to processes, threads and applications is known as resource
management.
To understand resource management in an operating system, we need to understand
process management, CPU scheduling, and the relationship between the two. Process
management involves tasks such as the creation, scheduling and termination of
processes, as well as the handling of deadlock. CPU scheduling is performed, using a
CPU scheduling algorithm, to reduce waiting time and support multiprogramming, so
that users can open many applications at one time, fulfilling the role of modern
computers today.
To our Head of Department, Mr Hairulanuar Rosman,
To our Programme Coordinator, Mr Mohd Farhan U’zair Paisan,
To our mentor, Mrs Zainora Kamal Ludin.
Thank you for your encouragement and support.
CONTENT
PART 01
INTRODUCTION
PART 02
PROCESS MANAGEMENT
PART 03
CPU SCHEDULING
PART 04
DEADLOCK
PART 05
EXERCISES
TABLE OF CONTENT

PART 1 INTRODUCTION
• Operating system
• Operating System as a resource manager
Overview of Resource Management

PART 2 PROCESS MANAGEMENT
Process Concept
• The process
• Process Control Block
• Process state
Process Scheduling
• Process scheduling
• CPU scheduling policies
• Scheduling queue
• CPU scheduling
• Types of CPU scheduling

PART 3 CPU SCHEDULING
Basic Concept
• CPU–I/O Burst Cycle
• CPU Scheduler
• Types of CPU Scheduler
• Dispatcher
CPU Scheduling Criteria
CPU Scheduling Algorithm
• FIFO
• RR
• Priority
• SJF
• SRT
• Multilevel queue
• Multilevel-feedback queue

PART 4 DEADLOCK
• Definition
• Necessary Condition
• Deadlock Solution

PART 5 EXERCISES
INTRODUCTION PART 01
Introduction
• Operating system
• Operating System as a resource manager
Overview of Resource Management
01
INTRODUCTION
1.1 INTRODUCTION
Before we go further into the resource management topic, we need to understand the
function of the operating system itself.
1.1.1 Operating System
In general, an operating system can be defined as a program or interface that allows
an electronic device or hardware to communicate with a human user. It provides a
friendly interface so that the user can use the electronic device or hardware
efficiently. Examples of operating systems include Android, Windows and iOS.
Basically, an operating system is software that controls and manages a computer's
hardware. It also provides a platform and basis for application programs and acts as
an intermediary between the computer user and the computer hardware.
1.1.2 Operating System as a Resource Manager
An operating system operates by managing all the processes and resources, such as
memory, input/output devices, the processor and files, so that the electronic
device can work in an excellent manner.
The operating system acts as the resource manager for these resources and allocates
them to specific programs to complete their tasks as necessary.
1.2 OVERVIEW OF RESOURCE MANAGEMENT
Resource management is the process in all operating systems by which system
resources (such as random-access memory, the Central Processing Unit (CPU), secondary
storage devices, external devices and so on) are assigned to specific processes, threads
and applications. This is usually done to achieve high throughput, quality of service,
fairness and balance among all processes. To accomplish this, several scheduling
algorithms are needed to assign processes and share the system resources as
required. This scheduling is the basic requirement for systems to perform
multitasking and multiplexing.
To complete a task, efficient process management is needed to manage and
control all process execution inside the CPU. A program in execution is called a process,
and it must go through several process states until it completes. The order in which
processes are executed on the CPU is determined by CPU scheduling, using a suitable
algorithm. These algorithms are discussed in Part 03 of this book. The relationship
between process management and CPU scheduling is summarized in Figure 1.1.
Figure 1.1 : Overview of resource management in operating system
PROCESS MANAGEMENT PART 02
Process Concept
• The process
• Process Control Block
• Process state
Process Scheduling
• Process scheduling
• CPU scheduling Policies
• Scheduling queue
• CPU scheduling
• Types of CPU scheduling
Summary
02
PROCESS
MANAGEMENT
2.1 PROCESS CONCEPT
A process is a program in execution, and process execution must proceed in a sequential
manner. The operation of a process is controlled with the help of a Process Control
Block (PCB), and a process needs particular resources such as CPU time, memory, files
and I/O devices to complete its task. These resources are typically allocated to the
process while it is executing.
In an operating system, a process may be in one of five states. An interrupt can occur
while a process is changing state.
In modern operating systems, some process should be running at all times to
maximize CPU utilization. This is called multiprogramming, and ensuring it is the role
of the process scheduler.
2.1.1 The Process
A process is an instance of a program in execution. It includes the current activity, as
represented by the contents of the processor registers and the value of the program
counter. A process generally also includes a process stack, which contains temporary
data (such as return addresses, local variables and function parameters), and a data
section, which contains global variables.
Process architecture is divided into four sections, as shown in Figure 2.1.
• The text section - contains the program code; the current point of execution
is represented by the value of the Program Counter.
• The data section - contains the process variables.
• The heap section - used for memory that is dynamically allocated during
run time.
• The stack section - stores temporary data such as function parameters, return
addresses and local variables.
Figure 2.1 : Process architecture
2.1.2 Process Control Block (PCB)
PCB stands for Process Control Block. It is a data structure used by the operating
system to keep all the information about every process. Each PCB is identified by an
integer Process ID (PID). The PCB stores all the required information about a process,
so that every running process can be traced. A PCB is illustrated in Figure 2.2, with
the following fields:
• Process State – New, ready, running, waiting, terminated
• Process ID/number – identity of the PCB.
• CPU registers and Program Counter - These need to be saved and restored
when swapping processes in and out of the CPU.
• CPU-Scheduling information - Such as priority information and pointers to
scheduling queues.
• Memory-Management information - page tables or segment tables.
• Accounting information - user and kernel CPU time consumed, account
numbers, limits, etc.
• I/O Status information - Devices allocated, open file tables, etc.
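To make the list of PCB fields concrete, here is a minimal sketch of a PCB as a Python data class. The field names follow the list above; this is an illustrative model for study purposes, not the data structure an actual kernel uses.

```python
from dataclasses import dataclass, field

@dataclass
class PCB:
    """Minimal model of a Process Control Block (illustrative only)."""
    pid: int                                         # Process ID/number: identity of the PCB
    state: str = "new"                               # new, ready, running, waiting, terminated
    program_counter: int = 0                         # address of the next instruction
    registers: dict = field(default_factory=dict)    # saved CPU registers
    priority: int = 0                                # CPU-scheduling information
    page_table: dict = field(default_factory=dict)   # memory-management information
    cpu_time_used: int = 0                           # accounting information (ms)
    open_files: list = field(default_factory=list)   # I/O status information

# The OS would create one PCB per process and update it on every state change.
pcb = PCB(pid=42)
pcb.state = "ready"
```

A real PCB also holds kernel-specific data (stack pointers, credentials, signal state) that is omitted here for brevity.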
Figure 2.2 : Process Control Block (PCB)
2.1.3 Process State
The state of a process changes as it executes. The state of a process is defined by its
current activity. The following five states are possible:
• New - the process is being created
• Ready - the process is waiting to be assigned to a CPU
• Running - instructions are being executed
• Waiting - the process is waiting for some event to occur (such as I/O
completion or signal reception)
• Terminated - the process has completed execution
Interrupts play an important role while a process changes state. Interrupts are
electronic signals sent to the CPU by external devices, normally I/O devices. They tell
the CPU to stop its current activity and execute the requested part of the system. The
transitions between process states are illustrated in Figure 2.3.
Figure 2.3 : Process state
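The five states and the legal transitions between them can be sketched as a small Python state machine. The transition set below follows the usual five-state diagram (admit, dispatch, interrupt, event wait, event completion, exit); the function name is my own.

```python
# Allowed process-state transitions, following the usual five-state diagram.
TRANSITIONS = {
    ("new", "ready"),          # admitted by the OS
    ("ready", "running"),      # dispatched by the scheduler
    ("running", "ready"),      # interrupt / time slice expired
    ("running", "waiting"),    # I/O or event wait
    ("waiting", "ready"),      # I/O completion or event occurred
    ("running", "terminated"), # exit
}

def change_state(current, new):
    """Move a process to a new state, rejecting illegal transitions."""
    if (current, new) not in TRANSITIONS:
        raise ValueError(f"illegal transition {current} -> {new}")
    return new

# Walk one process through a full lifetime.
state = "new"
for nxt in ("ready", "running", "waiting", "ready", "running", "terminated"):
    state = change_state(state, nxt)
```

Note that there is no transition from new directly to running: a created process must first enter the ready queue and be dispatched.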
2.2 PROCESS SCHEDULING
The objective of multiprogramming is to have some process running at all times, to
maximize CPU utilization. To meet this objective, the process scheduler selects an
available process for execution on a core. The process scheduler does this by
implementing suitable policies for swapping processes in and out of the CPU.
Each CPU core can run one process at a time. In a system with a single CPU core,
there will never be more than one process running at a time, while a multicore
system can run multiple processes at one time. If there are more processes than
cores, the excess processes must wait until a core is free and they can be
rescheduled. The number of processes currently in memory is known as the degree
of multiprogramming.
2.2.1 Process Scheduling Policies
In a multiprogramming ecosystem, there are usually more jobs to be executed than
could possibly be run at one time. Before the operating system can schedule them, it
needs to resolve three limitations of the system:
1. there are a finite number of resources (such as printers, disk drives, and tape
drives);
2. some resources, once they’re allocated, can’t be shared with another job
(such as printers);
3. some resources require operator intervention—that is, they can’t be
reassigned automatically from job to job (such as tape drives).
Figure 2.4 : Process Scheduling Policies
2.2.2 Scheduling Queues
The operating system keeps all Process Control Blocks (PCBs) in process scheduling
queues. A separate queue is maintained by the OS for each process state, and the
PCBs of all processes in the same execution state are placed in the same queue. When
the state of a process changes, its PCB is unlinked from its current queue and moved
to its new state queue.
The operating system maintains the following important process scheduling queues:
• Job queue − this queue keeps all the processes in the system.
• Ready queue − this queue keeps the set of all processes residing in main
memory, ready and waiting to execute. A new process is always put in this
queue.
• Device queues − processes that are blocked due to the unavailability of an
input/output device are kept in these queues.
Figure 2.5 : Scheduling queue
As processes enter the system, they are put in the ready queue, where they are ready
and waiting to be executed on a CPU core. This queue is generally stored as a linked
list; a ready-queue header contains pointers to the first PCB in the list, and each PCB
includes a pointer field that points to the next PCB in the ready queue. The system
also includes other queues. When a process is allocated a CPU core, it executes for a
while and eventually terminates, is interrupted, or waits for the occurrence of a
particular event, such as the completion of an I/O request. Suppose the process makes
an I/O request to a device such as a disk. Since devices run significantly slower than
processors, the process will have to wait for the I/O to complete. Processes that are
waiting for a certain event to occur (such as completion of I/O) are placed in a wait
queue.
Figure 2.6 : Ready queue and wait queue
A common representation of process scheduling is a queueing diagram, such as that
in Figure 2.7. Two types of queues are present: the ready queue and a set of wait
queues. The circles represent the resources that serve the queues, and the arrows
indicate the flow of processes in the system. A new process is initially put in the ready
queue. It waits there until it is selected for execution or dispatched. Once the process
is allocated a CPU core and is executing, one of several events could occur:
• The process could issue an I/O request and then be placed in an I/O wait
queue.
• The process could create a new child process and then be placed in a wait
queue while it awaits the child's termination.
• The process could be removed forcibly from the core, because of an
interrupt or because its time slice expired, and be put back in the ready queue.
In the first two cases, the process eventually switches from the waiting state
to the ready state and is then put back in the ready queue. A process continues
this cycle until it terminates, at which time it is removed from all queues and
has its PCB and resources deallocated.
Figure 2.7 : Queueing-diagram representation of process scheduling.
2.2.3 CPU Scheduling
The role of the CPU scheduler is to select a process from the ready queue and assign
it to a CPU core. The CPU scheduler must select a new process for the CPU frequently.
An I/O-bound process may execute for only a few milliseconds before waiting for an
I/O request. Although a CPU-bound process will require a CPU core for longer periods,
the scheduler is unlikely to grant the core to one process for an extended period. It
will forcibly remove the CPU from a process and reschedule another process to run.
Therefore, the CPU scheduler executes at least once every 100 milliseconds.
Some operating systems have a swapping method, in which a process can be "swapped
out" from memory to disk, with its current status saved. Later, the process is
"swapped in" from disk back to memory and its execution continues where it left off.
Temporarily removing a process from memory reduces the degree of
multiprogramming, which can be advantageous when memory is overcommitted.
2.2.4 Types of Process Schedulers
Process scheduling is handled by special software called a scheduler. There are three
types of process schedulers: long-term, short-term and medium-term schedulers.
Their differences are summarized in Table 1.
Table 1 : Differences between types of process schedulers
CPU SCHEDULING PART 03
Basic Concept
• CPU–I/O Burst Cycle
• CPU Scheduler
• Types of CPU Scheduler
• Dispatcher
CPU Scheduling Criteria
CPU Scheduling Algorithm
• FIFO
• RR
• Priority
• SJF
• SRT
• Multilevel queue
Summary
03
CPU
SCHEDULING
3.1 BASIC CONCEPT
In an operating system with a single CPU core, only one process can run at a time.
Any other process must wait until the CPU core is free and it is rescheduled. CPU
scheduling is the foundation of multiprogrammed operating systems. By switching the
CPU between processes, the operating system makes the computer more productive.
In this chapter, basic CPU-scheduling concepts and the operation of CPU scheduling
algorithms are introduced.
With multiprogramming, several processes are kept in memory at one time.
When one process has to wait, the operating system takes the CPU away from that
process and gives it to another process. On a multicore system, this concept of
keeping the CPU busy is extended to all processing cores. Scheduling of this kind is
fundamental to modern operating systems: almost all computer resources are
scheduled before use. The CPU is a major computer resource, and its scheduling is
central to operating-system design.
3.1.1 CPU–Input/Output Burst Cycle
The success of CPU scheduling depends on an observed property of processes:
process execution consists of a cycle of CPU execution and input/output wait,
alternating between these two states. Process execution begins with a CPU burst,
followed by an input/output burst, then another CPU burst, then another
input/output burst, and so on. Eventually, the final CPU burst ends with a system
request to terminate execution.
Figure 3.1 : CPU–Input/output Burst Cycle
3.1.2 CPU Scheduler
Whenever the CPU becomes idle, the CPU scheduler selects a process from the ready
queue in memory and allocates the CPU to it, using a suitable CPU scheduling
algorithm.
3.1.3 Types of CPU Scheduling
CPU scheduling decisions may take place under the following four situations:
1. When a process switches from the running state to the waiting state (for
example, for an input/output request, or while waiting for the termination of
one of its child processes).
2. When a process switches from the running state to the ready state (for
example, when an interrupt occurs).
3. When a process switches from the waiting state to the ready state (for
example, on completion of input/output).
4. When a process terminates/completes.
For situations 1 and 4 there is no choice in terms of scheduling: a new process must
be selected for execution. However, there is a choice in situations 2 and 3.
When scheduling takes place only under situations 1 and 4, the scheduling scheme is
known as non-pre-emptive; otherwise, it is pre-emptive. These types of CPU
scheduling are illustrated in Figure 2.3 (process states).
3.1.4 Differences between pre-emptive and non-pre-emptive
Table 2 shows the differences between pre-emptive and non-pre-emptive scheduling.
Table 2 : Pre-emptive vs non pre-emptive CPU Scheduling
3.1.5 Dispatcher
Another component involved in the CPU scheduling function is the dispatcher. The
dispatcher is the module that gives control of the CPU to the process selected by the
short-term scheduler. This function involves:
• Switching context
• Switching to user mode
• Jumping to the proper location in the user program to resume that program
from where it left off
The dispatcher should be as fast as possible, as it is invoked during every process
switch. The time taken by the dispatcher to stop one process and start another is
known as dispatch latency, which can be explained using Figure 3.2:
Figure 3.2 : Dispatch latency
Schedulers and dispatchers are both associated with the process scheduling of an
operating system. The key difference is that the scheduler selects a process, out of
several processes, to be executed, while the dispatcher allocates the CPU to the
process selected by the scheduler.
3.2 CPU SCHEDULING CRITERIA
Five criteria are involved in scheduling algorithms, as shown in Figure 3.3:
Figure 3.3 : CPU scheduling criteria
3.3 CPU SCHEDULING ALGORITHM
To decide which process to execute first and which to execute last, so as to achieve
maximum CPU utilization, computer scientists have defined several algorithms:
Figure 3.4 : CPU scheduling algorithm
3.3.1 First In First Out Scheduling (FIFO)
This is the simplest and easiest algorithm. The process that requests the CPU first
gets the CPU allocation first. This scheduling method can be managed with a FIFO
queue.
Example
Consider the following set of processes.
- Show the result in a Gantt chart
- Calculate the average waiting time
Processes are assumed to arrive in order.

Process   Burst Time (ms)
P1        24
P2        3
P3        3
Answer

|  P1  |  P2  |  P3  |
0      24     27     30

Waiting time of each process = completion time - burst time (all processes arrive at time 0)
P1 : 24 - 24 = 0
P2 : 27 - 3 = 24
P3 : 30 - 3 = 27
The waiting time is 0 milliseconds for process P1, 24 milliseconds for P2, and 27
milliseconds for P3.
Average waiting time (AWT): (0 + 24 + 27)/3 = 17 ms
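The FIFO calculation above can be reproduced with a short Python sketch; the function name is my own, and all processes are assumed to arrive at time 0 in the given order, as in the example.

```python
def fifo_waiting_times(bursts):
    """FIFO/FCFS: each process waits for the total burst time of those before it.
    bursts is a list of (name, burst_ms); arrival is time 0 in list order."""
    waits, elapsed = {}, 0
    for name, burst in bursts:
        waits[name] = elapsed   # waiting time = start time - arrival time (0)
        elapsed += burst        # running total = completion time so far
    return waits

waits = fifo_waiting_times([("P1", 24), ("P2", 3), ("P3", 3)])
awt = sum(waits.values()) / len(waits)
# waits == {'P1': 0, 'P2': 24, 'P3': 27}; awt == 17.0
```

Running the long P1 first makes P2 and P3 wait behind it; this is the convoy effect that motivates the other algorithms below.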
3.3.2 Round Robin Scheduling (RR)
The Round Robin (RR) scheduling algorithm is designed primarily for time-sharing
systems. It is like FCFS/FIFO scheduling, but pre-emption is added, which enables the
system to switch to another process. In RR scheduling:
• A fixed time, called a quantum, is allocated to each process.
• Once a process has executed for the given quantum, it is pre-empted,
and another process executes for its quantum.
• Context switching is used to save the states of pre-empted processes.
• This algorithm is simple and easy to implement, and it is starvation-free, so
all processes get a fair share of the CPU.
• Generally, the length of the time quantum is from 10 to 100 milliseconds.
Example
Consider the following set of processes.
- Show the result in a Gantt chart
- Calculate the average waiting time, given a time quantum of 4 ms
Processes are assumed to arrive in order.

Process   Burst Time (ms)
P1        24
P2        3
P3        3
Answer
Process P1 gets the first 4 milliseconds. Since P1 requires another 20 milliseconds, it
is pre-empted after the first quantum. The CPU is given to the next process in the
queue (P2). P2 quits before its time quantum expires because it needs only 3
milliseconds. Then the CPU is given to the next process in the queue (P3).

|  P1  |  P2  |  P3  |  P1  |  P1  |  P1  |  P1  |  P1  |
0      4      7      10     14     18     22     26     30

P1 : 30 - 24 = 6
P2 : 7 - 3 = 4
P3 : 10 - 3 = 7
Average waiting time (AWT) = (6 + 4 + 7)/3 ≈ 5.67 ms
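The Round Robin timeline above can be simulated with a ready queue, as sketched below; the function name is my own, and all processes are assumed to arrive at time 0 in the given order.

```python
from collections import deque

def round_robin_waiting_times(bursts, quantum):
    """Round Robin with all processes arriving at time 0 in list order.
    Waiting time = completion time - burst time (arrival is 0)."""
    remaining = dict(bursts)
    queue = deque(name for name, _ in bursts)
    time, completion = 0, {}
    while queue:
        name = queue.popleft()
        run = min(quantum, remaining[name])  # run one quantum, or less if finishing
        time += run
        remaining[name] -= run
        if remaining[name] == 0:
            completion[name] = time          # process finished
        else:
            queue.append(name)               # pre-empted: back of the ready queue
    return {name: completion[name] - burst for name, burst in bursts}

waits = round_robin_waiting_times([("P1", 24), ("P2", 3), ("P3", 3)], quantum=4)
awt = sum(waits.values()) / len(waits)
# waits == {'P1': 6, 'P2': 4, 'P3': 7}; awt == 17/3 ≈ 5.67
```

Compared with the FIFO result (17 ms), the same workload now averages about 5.67 ms because the short jobs are no longer stuck behind P1.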
3.3.3 Priority Scheduling
A priority is associated with each process, and the CPU is allocated to the process with
the highest priority. Equal-priority processes are scheduled in FCFS/FIFO order.
However, there is no general agreement on whether 0 is the highest or the lowest
priority: some systems use low numbers to represent low priority, and others the
reverse. In the example below, a lower number represents a higher priority.
Example
Consider the following set of processes.
- Show the result in a Gantt chart
- Calculate the average waiting time.
Processes are assumed to arrive in order.

Process   Burst Time (ms)   Priority
P1        10                3
P2        1                 1
P3        2                 4
P4        1                 5
P5        5                 2
Answer

|  P2  |  P5  |  P1  |  P3  |  P4  |
0      1      6      16     18     19

P1 : 16 - 10 = 6
P2 : 1 - 1 = 0
P3 : 18 - 2 = 16
P4 : 19 - 1 = 18
P5 : 6 - 5 = 1
Average waiting time (AWT) = (6 + 0 + 16 + 18 + 1)/5 = 8.2 ms
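Since all five processes arrive at time 0, the non-pre-emptive priority schedule is just FIFO over the processes sorted by priority. A minimal sketch (function name my own, lower number = higher priority as in the example):

```python
def priority_waiting_times(procs):
    """Non-pre-emptive priority scheduling; lower number = higher priority.
    procs: list of (name, burst_ms, priority); all arrive at time 0."""
    order = sorted(procs, key=lambda p: p[2])  # highest priority runs first
    waits, elapsed = {}, 0
    for name, burst, _ in order:
        waits[name] = elapsed                  # waits for everything scheduled before it
        elapsed += burst
    return waits

procs = [("P1", 10, 3), ("P2", 1, 1), ("P3", 2, 4), ("P4", 1, 5), ("P5", 5, 2)]
waits = priority_waiting_times(procs)
awt = sum(waits.values()) / len(waits)
# waits: P2=0, P5=1, P1=6, P3=16, P4=18; awt == 8.2
```

Note that a steady stream of high-priority arrivals could starve P4 indefinitely; aging (gradually raising the priority of waiting processes) is the usual remedy.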
3.3.4 Shortest Job First Scheduling (SJF)
Shortest Job First scheduling runs the process with the shortest burst time or
duration first. It comes in two forms:
1. Non-pre-emptive - once a process takes the CPU, it holds it until it releases
the CPU; among the processes that have arrived, the one with the shortest
burst time is allocated next.
2. Pre-emptive - the choice depends on arrival time and then on the shortest
remaining burst time; a newly arrived process can pre-empt the running one.
Example
Consider the following set of processes for Shortest Job First.
- Show the result in a Gantt chart
- Calculate the average waiting time.

Process   Arrival time (ms)   Burst Time (ms)
P1        0                   8
P2        1                   4
P3        2                   9
P4        3                   5

Answer

|  P1  |  P2  |  P4  |  P3  |
0      8      12     17     26
Turnaround time = completion time - arrival time
Waiting time = turnaround time - burst time

Process   Arrival time (ms)   Burst time (ms)   Completion time (ms)   Turnaround time (ms)   Waiting time (ms)
P1        0                   8                 8                      8                      0
P2        1                   4                 12                     11                     7
P3        2                   9                 26                     24                     15
P4        3                   5                 17                     14                     9

Average waiting time: (0 + 7 + 15 + 9)/4 = 7.75 ms
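The non-pre-emptive SJF schedule above can be computed as follows; the function name is my own, and the sketch idles the CPU forward when no process has arrived yet.

```python
def sjf_schedule(procs):
    """Non-pre-emptive SJF: among arrived processes, run the shortest burst.
    procs: list of (name, arrival_ms, burst_ms). Returns waiting time per process."""
    pending = list(procs)
    time, waits = 0, {}
    while pending:
        ready = [p for p in pending if p[1] <= time]
        if not ready:                          # CPU idle until the next arrival
            time = min(p[1] for p in pending)
            continue
        name, arrival, burst = min(ready, key=lambda p: p[2])  # shortest burst
        waits[name] = time - arrival           # = turnaround - burst
        time += burst                          # run to completion (no pre-emption)
        pending.remove((name, arrival, burst))
    return waits

waits = sjf_schedule([("P1", 0, 8), ("P2", 1, 4), ("P3", 2, 9), ("P4", 3, 5)])
awt = sum(waits.values()) / len(waits)
# waits: P1=0, P2=7, P3=15, P4=9; awt == 7.75
```

SJF is provably optimal for average waiting time, but in practice the burst lengths must be predicted, since they are not known in advance.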
3.3.5 Shortest Remaining Time Scheduling (SRT)
This is a pre-emptive algorithm: whenever a process arrives, the scheduler runs the
process with the shortest remaining burst time. It is also known as pre-emptive
Shortest Job First.
Example
Consider the following set of processes for Shortest Remaining Time.
- Show the result in a Gantt chart
- Calculate the average waiting time.

Process   Arrival time (ms)   Burst Time (ms)
P1        0                   7
P2        2                   4
P3        4                   1
P4        5                   4
Answer

|  P1  |  P2  |  P3  |  P2  |  P4  |  P1  |
0      2      4      5      7      11     16

Turnaround time = completion time - arrival time
Waiting time = turnaround time - burst time

Process   Arrival time (ms)   Burst time (ms)   Completion time (ms)   Turnaround time (ms)   Waiting time (ms)
P1        0                   7                 16                     16                     9
P2        2                   4                 7                      5                      1
P3        4                   1                 5                      1                      0
P4        5                   4                 11                     6                      2

Average waiting time (AWT): (9 + 1 + 0 + 2)/4 = 3 ms
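The SRT schedule can be simulated one millisecond at a time, re-checking the shortest remaining burst at every step; the function name is my own, and the 1 ms step size is an assumption that matches the integer burst times of the example.

```python
def srt_schedule(procs):
    """Shortest Remaining Time (pre-emptive SJF), simulated in 1 ms steps.
    procs: list of (name, arrival_ms, burst_ms). Returns waiting time per process."""
    remaining = {name: burst for name, _, burst in procs}
    arrival = {name: arr for name, arr, _ in procs}
    time, completion = 0, {}
    while remaining:
        ready = [n for n in remaining if arrival[n] <= time]
        if not ready:                                  # CPU idle: nothing has arrived
            time += 1
            continue
        name = min(ready, key=lambda n: remaining[n])  # shortest remaining burst wins
        remaining[name] -= 1                           # run it for one millisecond
        time += 1
        if remaining[name] == 0:
            completion[name] = time
            del remaining[name]
    # waiting time = turnaround - burst = (completion - arrival) - burst
    return {n: completion[n] - arr - burst for n, arr, burst in procs}

waits = srt_schedule([("P1", 0, 7), ("P2", 2, 4), ("P3", 4, 1), ("P4", 5, 4)])
awt = sum(waits.values()) / len(waits)
# waits: P1=9, P2=1, P3=0, P4=2; awt == 3.0
```

Re-evaluating at each step is what lets P3 (the 1 ms job arriving at t=4) pre-empt P2 immediately, exactly as in the Gantt chart above.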
3.3.6 Multilevel Queue Scheduling Algorithm / Multilevel Feedback Queue
Scheduling Algorithm
The multilevel queue scheduling algorithm divides the ready queue into several
separate queues. Processes are permanently assigned to a single queue, generally
based on some process attribute, such as memory size, process priority or process
type, and they do not move between queues. Every queue has its own scheduling
algorithm. This arrangement has the advantage of low scheduling overhead, but the
disadvantage of being inflexible.
Multilevel feedback queue scheduling, in contrast, permits a process to move between
queues, so as to separate processes with different CPU-burst characteristics. If a
process uses too much CPU time, it is moved to a lower-priority queue. A process that
waits too long in a lower-priority queue may be moved to a higher-priority queue.
This form of aging prevents starvation.
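The feedback idea (a process that uses its whole quantum is demoted to a lower-priority queue) can be sketched as below. The number of levels, the per-level quanta, and the function name are all arbitrary assumptions for illustration; aging back upward is omitted for brevity.

```python
from collections import deque

QUANTA = [4, 8, 16]   # assumed time quantum per level (ms): longer at lower priority

def mlfq_trace(bursts):
    """Run jobs through a 3-level feedback queue; return the (name, level) run order.
    bursts: list of (name, burst_ms); all jobs arrive at time 0."""
    queues = [deque(), deque(), deque()]
    remaining = dict(bursts)
    for name, _ in bursts:
        queues[0].append(name)               # new jobs enter the top (highest) queue
    trace = []
    while any(queues):
        level = next(i for i, q in enumerate(queues) if q)  # highest non-empty queue
        name = queues[level].popleft()
        trace.append((name, level))
        run = min(QUANTA[level], remaining[name])
        remaining[name] -= run
        if remaining[name] > 0:              # used the whole quantum: demote one level
            queues[min(level + 1, 2)].append(name)
    return trace

trace = mlfq_trace([("A", 3), ("B", 20)])
# The short job A finishes at level 0; the long job B is demoted through levels 0, 1, 2.
```

This reproduces the behaviour described above: short, interactive-style bursts stay at high priority, while CPU-hungry jobs sink to queues with longer quanta.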
DEADLOCK PART 04
• Definition
• Necessary Condition
• Deadlock Solution
Summary
04
DEADLOCK
4.1 DEADLOCK
4.1.1 Deadlock Definition
A process in an operating system uses a resource by requesting the resource, using it,
and then releasing it. Deadlock is a condition in which a set of processes are
blocked because each process is holding a resource and waiting for another resource
acquired by some other process. For example, in Figure 4.1, several cars are locked
with each other and none of them can move because they are in front of one another.
A similar situation can occur in operating systems when two or more processes
hold some resources and wait for resources held by the others.
Figure 4.1 : Example of deadlock situation
Figure 4.2 : Deadlock explanation
Deadlock can be illustrated by Figure 4.2: process 1 is holding resource 1 and waiting
for resource 2, which is acquired by process 2, while process 2 is waiting for
resource 1.
An explanation of deadlock: https://youtu.be/onkWXaXAgbY
4.1.2 Necessary Condition
Deadlock can occur only if the following four conditions hold at the same time:
1. Mutual Exclusion: at least one resource is non-shareable (only one process
can use it at a time).
2. Hold and Wait: a process is holding at least one resource while waiting for
additional resources held by other processes.
3. No Pre-emption: a resource cannot be taken from a process until the process
releases it.
4. Circular Wait: a set of processes are waiting for each other in a circular chain.
4.1.3 Deadlock Solution
Deadlock can be prevented by negating one of the four necessary conditions:
• Non-blocking synchronization algorithms - removing the mutual exclusion
condition means that no process may have exclusive access to a resource. This
proves impossible for resources that cannot be spooled, and even with spooled
resources deadlock could still occur.
• Requesting all resources at once - removing the hold-and-wait condition
requires a process to request all the resources it needs before it starts, so it
never holds some resources while waiting for others; resources may then sit
allocated but unused for long periods.
• Allowing pre-emption - removing the no-pre-emption condition allows the
operating system to take a resource away from a process; the pre-empted
process must then be rolled back and restarted safely.
• Resource ordering - removing the circular-wait condition imposes a global
ordering on resources, and every process must request its resources in that
order.
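One practical way to attack the circular-wait condition is to impose a global ordering on resources and always acquire them in that order. The sketch below illustrates this with Python threads; the ranks, names and helper functions are my own, not part of any standard API.

```python
import threading

# Two resources with a fixed global order: always lock the lower rank first.
lock_a = threading.Lock()   # resource 1
lock_b = threading.Lock()   # resource 2
ORDER = {id(lock_a): 1, id(lock_b): 2}

def acquire_in_order(*locks):
    """Acquire locks sorted by their global rank, preventing circular wait."""
    for lock in sorted(locks, key=lambda l: ORDER[id(l)]):
        lock.acquire()

def release_all(*locks):
    for lock in locks:
        lock.release()

result = []

def worker(name, first, second):
    # Even though the two workers name the locks in opposite orders, the
    # actual acquisition order is the same, so no circular wait can form.
    acquire_in_order(first, second)
    result.append(name)
    release_all(first, second)

t1 = threading.Thread(target=worker, args=("P1", lock_a, lock_b))
t2 = threading.Thread(target=worker, args=("P2", lock_b, lock_a))
t1.start(); t2.start()
t1.join(); t2.join()
# Both workers complete; without the ordering, acquiring the locks in the
# opposite orders directly could deadlock, exactly as in Figure 4.2.
```

This is a prevention technique: it rules deadlock out in advance, at the cost of requiring every process to know and follow the global ordering.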
EXERCISES PART 05
QUESTION & ANSWER
05
EXERCISES
QUESTION
1. What is the function of an operating system?
2. How does an operating system relate to resource management?
3. What is the meaning of resource management?
4. Explain process management.
5. What is the role of an interrupt?
6. Why are interrupts important?
7. What is an example of an interrupt?
8. Explain how an interrupt works.
9. What information is inside a Process Control Block (PCB)?
10. Explain the function of the Process Control Block (PCB).
11. A process can be terminated due to a normal exit. How can this happen?
12. What is the ready state of a process?
13. What do you mean by a process?
14. What are the different states of a process?
15. What is the advantage of a multiprocessor system?
16. How many necessary conditions must hold for deadlock to occur? Explain each.
17. What is deadlock? Explain.
18. Where is the address of the next instruction to be executed for the current
process stored?
19. Which process state is defined as "the process is waiting to be assigned to a
processor"?