
INTRODUCTION TO OPERATING SYSTEM

OPERATING SYSTEM introduces the design and implementation of operating systems. This course briefly covers the evolution and major components of an operating system. Particular emphasis is given to three major OS subsystems: memory management, process management and file systems, as well as the operating systems in today's mobile devices that support distributed systems.

1.1 Operating System Environment

System Call vs Application Programming Interface (API)

Examples: APIs for file systems, graphical user interfaces, networking

 Win32 API – for Windows
 POSIX API – for POSIX-based systems (including virtually all versions of UNIX, Linux, and Mac OS X)
 Java API – for the Java virtual machine (JVM)
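
As a quick illustration (not part of the original slides), the Python sketch below contrasts a call through a high-level library API with a thin wrapper over the POSIX write() system call; the file name demo.txt is made up for the example and a POSIX system is assumed.

```python
# Minimal sketch: the same byte-writing task done through a high-level
# library API and through os.write(), a thin wrapper over the POSIX
# write() system call (assumes a POSIX system such as Linux or macOS).
import os

# High-level API: buffered file object provided by the language runtime.
with open("demo.txt", "w") as f:
    f.write("written via the library API\n")

# Low-level path: os.open()/os.write() map almost directly onto the
# open() and write() system calls defined by POSIX.
fd = os.open("demo.txt", os.O_WRONLY | os.O_APPEND)
os.write(fd, b"written via (wrapped) system calls\n")
os.close(fd)
```

On Windows, the same request would ultimately be serviced through the Win32 API instead.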





CHAPTER 2

This chapter covers the primary resources of an operating system, including the physical and virtual organization of memory management. Process states, scheduling, interrupts, multi-threading and deadlocks are also discussed.

Lesson Learning Outcomes:
At the end of this chapter, students should be able to:
• Explain memory management in an operating system
• Explain process management in an operating system
• Explain file management in an operating system
• Explain a deadlock situation in an operating system


2.1 Memory Management
2.2 Process Management
2.3 Deadlock


• Function of memory manager
• Characteristics of memory manager
• Memory management strategies
• Routines
• Memory swapping
• Fixed partition memory


2.1 Memory Management

What is Memory Management?

• Memory management is the act of managing computer memory.
• It provides ways to allocate portions of memory to programs at their request, and frees them for reuse when no longer needed.
• It includes strategies for obtaining optimal memory performance.


2.1 Memory Management

Hierarchy of Memory

• The memory hierarchy in computer storage describes each level in the hierarchy by response time.
• Since response time, complexity, and capacity are related, the levels may also be distinguished by their performance and controlling technologies.
• There are four major storage levels:
  a. Internal – processor registers and cache.
  b. Main – the system RAM and controller cards.
  c. On-line mass storage – secondary storage.
  d. Off-line bulk storage – tertiary and off-line storage.


2.1 Memory Management

Diagram of Hierarchy of Memory


2.1 Memory Management

Memory Management Strategies

Three types of memory management strategies:

a. Fetch strategy
• Demand or anticipatory
• Decides which piece of data to load next

b. Placement strategy
• Decides where in main memory to place incoming data, using THREE methods:
   First Fit
   Best Fit
   Worst Fit

c. Replacement strategy
• Decides which data to remove from main memory to make more space


2.1 Memory Management
> Memory Management Strategies

Placement Strategy – decides where in main memory to place incoming data, using three methods:

First Fit
• Allocate the first hole that is big enough.
• A placement strategy which selects the first space on the free list that is large enough.

Best Fit
• Allocate the smallest hole that is big enough; must search the entire list, unless it is ordered by size.
• Produces the smallest leftover hole.

Worst Fit
• Allocate the largest hole; must also search the entire list.
• Produces the largest leftover hole.
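
The following Python sketch (an illustration added here, not taken from the slides) implements all three placement methods under the same assumption used in the worked examples that follow: each memory block holds at most one process and is not split.

```python
# Illustrative sketch (not from the original slides) of the three placement
# methods, assuming each memory block holds at most one process (no splitting),
# as in the worked examples that follow.

def place(blocks, processes, choose):
    """Return {process: block index} and the list of processes put on HOLD."""
    free = dict(enumerate(blocks))            # block index -> block size
    placed, hold = {}, []
    for name, size in processes:
        candidates = [i for i, b in free.items() if b >= size]
        if not candidates:
            hold.append(name)                 # no block is large enough
            continue
        chosen = choose(candidates, free)
        placed[name] = chosen
        del free[chosen]                      # the whole block is used up
    return placed, hold

def first_fit(candidates, free):
    return candidates[0]                      # first block that fits

def best_fit(candidates, free):
    return min(candidates, key=free.get)      # smallest block that fits

def worst_fit(candidates, free):
    return max(candidates, key=free.get)      # largest block that fits

if __name__ == "__main__":
    blocks = [100, 170, 250, 300, 200]        # B1..B5 from the first fit example
    procs = [("P1", 120), ("P2", 340), ("P3", 230), ("P4", 400), ("P5", 300),
             ("P6", 290), ("P7", 170), ("P8", 200), ("P9", 90), ("P10", 10)]
    placed, hold = place(blocks, procs, first_fit)
    print("placed (0-based block index):", placed)   # P1->B2, P3->B3, P5->B4, P7->B5, P9->B1
    print("on HOLD:", hold)                          # ['P2', 'P4', 'P6', 'P8', 'P10']
```

Running it with the block and process sizes of the first fit example below reproduces the HOLD list P2, P4, P6, P8 and P10.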


2.1 Memory Management
> Memory Management Strategies

Placement Strategies
Example: First Fit

• Total memory: 1024 MB
• Memory divided into 5 blocks (in MB): B1 (100), B2 (170), B3 (250), B4 (300), B5 (200)
• Queue of processes: P1 = 120, P2 = 340, P3 = 230, P4 = 400, P5 = 300, P6 = 290, P7 = 170, P8 = 200, P9 = 90, P10 = 10
• After first fit placement: P1 → B2, P3 → B3, P5 → B4, P7 → B5, P9 → B1
• Processes on HOLD: P2, P4, P6, P8 and P10


2.1 Memory Management
> Memory Management Strategies

Placement Strategies
Example: Best Fit

• Total memory: 1024 MB
• Memory divided into 5 blocks (in MB): B1 (100), B2 (170), B3 (250), B4 (300), B5 (200)
• Queue of processes: P1 = 120, P2 = 230, P3 = 190, P4 = 300, P5 = 190, P6 = 90
• After best fit placement: P1 → B2, P2 → B3, P3 → B5, P4 → B4, P6 → B1
• Process on HOLD: P5


2.1 Memory Management
> Memory Management Strategies

Placement Strategies
Example: Worst Fit

• Given memory partitions of 150 KB, 300 KB, 260 KB and 100 KB
• How would the worst fit algorithm place processes of P1 = 100 KB, P2 = 120 KB and P3 = 90 KB?
• Worst fit placement: P1 → 300 KB partition, P2 → 260 KB partition, P3 → 150 KB partition

2.1 Memory Management

What are Routines?

• A routine is a section of a program that performs a particular task.
• Programs consist of modules; each module contains one or more routines.
• The term routine is synonymous with procedure, function, and subroutine.
• A routine is any sequence of code that is intended to be called and used repeatedly during the execution of a program.
• Generally, the resident operating system occupies low memory. The remaining memory, called the transient area, is where application programs and transient operating system routines are loaded.


2.1 Memory Management
> Routines

Types of Routines

 Resident

• Any computer routine that is stored permanently in memory.
• These modules/routines must remain in memory at all times.
• Instructions and data that remain in memory can be accessed instantly.
• Contains a command processor (or shell), an input/output control system (IOCS), a file system, and interrupt handler routines.

 Transient

• A routine that is loaded at run time.
• Stored on disk and read into memory only when needed.
• The transient area, containing all the space not allocated to the resident operating system, can hold one of these transient modules or an application program.


2.1 Memory Management

Memory Swapping

What is SWAPPING?
• A technique for replacing blocks of data (pages / segments) in memory.
• Used when main memory is no longer sufficient.
• A simple memory/process management technique used by the operating system to increase processor utilization by moving some blocked processes from main memory to secondary memory.
• Swap memory may slow down your computer's performance.

Schematic View of Swapping

2.1 Memory Management

Fixed Partition Memory: Definition

 Each active process receives a fixed-size block of memory.
 The processor rapidly switches between processes.
 Memory is divided into 'N' partitions at boot time.
 'N' can be chosen by the OS scheduler.
 When a process arrives, it is put into the queue for the smallest partition it will fit into (see the sketch below).
 Any left-over space in a partition is wasted (internal fragmentation).
 Partition sizes are fixed until reboot.
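
A minimal sketch of the queue-per-partition idea described above; the partition sizes, process sizes and the admit helper are all assumptions made for illustration, not part of the slides.

```python
# Minimal sketch of fixed-partition memory with one queue per partition:
# an arriving process is appended to the queue of the smallest partition
# it fits into. All sizes here are made up for the example.
from collections import deque

partitions = [100, 250, 500]                 # partition sizes fixed at boot time (KB)
queues = {size: deque() for size in partitions}

def admit(name, size):
    fitting = [p for p in sorted(partitions) if p >= size]
    if not fitting:
        print(f"{name} ({size} KB) fits no partition")
        return
    target = fitting[0]                      # smallest partition that fits
    queues[target].append(name)
    print(f"{name} ({size} KB) queued for the {target} KB partition; "
          f"internal fragmentation if loaded: {target - size} KB")

for name, size in [("P1", 90), ("P2", 240), ("P3", 260), ("P4", 600)]:
    admit(name, size)
```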


2.1 Memory Management

Disadvantages

 Memory waste under fixed-partition multiprogramming.
 Internal fragmentation – a process does not take up its entire partition, wasting memory.
 Only one job per partition.
 Some partitions may go unused.


2.1 Memory Management

What is Virtual Memory?

 The term "virtual memory" refers to a technique that stores parts of memory on a hard drive.
 This technique makes more memory available to programs because hard drive space is used as if it were additional random-access memory (RAM).
 It solves the problem of limited memory space.
 The area of the hard disk that stores the RAM image is called the page (or segment) file.
 It holds pages and segments of RAM on the hard disk, and the operating system moves data back and forth between the page file and RAM.


2.1 Memory Management

Virtual Memory Implementation


2.1 Memory Management

Types of Virtual Memory

a. Demand Paging

 Virtual memory is divided into fixed-size blocks called pages.
 Demand paging requires that pages be brought into memory only when the executing process demands them.
 This is often referred to as lazy evaluation, as only those pages demanded by the process are swapped from secondary storage to main memory (a short sketch follows this list).

b. Demand Segmentation

 The most efficient virtual memory system, but its implementation is very complicated (in hardware).
 Virtual memory is divided into variable-length regions.
 Memory allocation is a dynamic process that uses a best fit or first fit algorithm.
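
A small illustrative sketch of demand (lazy) paging; the page contents, page size and the access helper are assumptions made for the example, not part of the slides.

```python
# Illustrative sketch of demand (lazy) paging: a page is copied from the
# backing store into a frame only when the running process first touches it.
PAGE_SIZE = 4096
backing_store = {0: b"code", 1: b"data", 2: b"stack"}   # hypothetical page contents
page_table = {p: None for p in backing_store}           # virtual page -> frame, None = not resident
frames = []                                             # frames actually allocated in RAM

def access(vaddr):
    page, offset = divmod(vaddr, PAGE_SIZE)
    if page_table[page] is None:                        # page fault: bring the page in on demand
        frames.append(backing_store[page])
        page_table[page] = len(frames) - 1
        print(f"page fault on page {page}, loaded into frame {page_table[page]}")
    return frames[page_table[page]]

access(0)                # faults and loads page 0
access(100)              # same page, already resident: no fault
access(2 * PAGE_SIZE)    # faults and loads page 2; page 1 is never loaded
```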


2.1 Memory Management

Paging vs. Segmentation

Paging
• Fixed-length blocks
• Block replacement is easy
• No need to calculate addresses
• No external fragmentation
• Units of data are broken up into separate pages

Segmentation
• Variable-length blocks
• Block replacement is hard
• Need to calculate addresses
• No internal fragmentation
• Keeps blocks of data as single units


• Process states
• Process life cycle
• How does process scheduling work?
• Role of interrupts
• Threads
• CPU scheduler
• Pre-emptive and non-pre-emptive scheduling
• Scheduling algorithms


2.2 Process Management

Types of Process States

a. Created / New
b. Ready
c. Running
d. Blocked / Waiting

e. Terminated / Completed


2.2 Process Management

Types of Process States

Created / New
 The process is being created.
 When a process is first created, it occupies the "created" or "new" state.
 In this state, the process awaits admission to the "ready" state.

Ready
 The process is waiting to be assigned to the processor.
 A "ready" or "waiting" process has been loaded into main memory and is awaiting execution on a CPU (to be context-switched onto the CPU by the dispatcher, or short-term scheduler).

Running
 The process's instructions are being executed by the CPU.
 A process moves into the running state when it is chosen for execution.
 Its instructions are executed by one of the CPUs (or cores) of the system. There is at most one running process per CPU or core.


2.2 Process Management

Types of Process States

Blocked / Waiting
 The process is waiting for a resource to become available or for some event to occur.

Terminated / Completed
 The process has finished execution and released the resources it was using.
 A process may be terminated either from the "running" state by completing its execution or by being explicitly killed. In either case, the process moves to the "terminated" state.


2.2 Process Management

Types of Process Life Cycle

a. Running State

 The process is executing on a processor

b. Ready State

 The process could execute on a processor if
one were available

c. Blocked State

 The process is waiting for some event to happen
before it can proceed


2.2 Process Management

How Does Process Scheduling Work?

When to schedule?
• When a process exits.
• When a process blocks on I/O.

Processor Scheduling Policy
• Decides which process runs at a given time.
• Different schedulers have different goals:
  a. Maximize throughput – the number of processes completed per unit time.
  b. Minimize latency – the time between work becoming enabled and its subsequent completion.
  c. Prevent indefinite postponement.
  d. Complete processes by a given deadline.
  e. Maximize processor utilization – keep the CPU busy.


2.2 Process Management

Role of Interrupts

What is an interrupt?

 A signal sent to the CPU by external devices, normally I/O devices.
 Interrupts tell the CPU to stop its current activities and execute the appropriate part of the operating system.

Types of Interrupts

 Hardware interrupts: generated by hardware devices to signal that they need some attention from the OS. They may have just received some data (e.g., keystrokes on the keyboard).
 Software interrupts: generated by programs when they want to request a system call to be performed by the operating system.


2.2 Process Management

Whenever the CPU becomes idle, it is the job of the CPU scheduler (a.k.a. the short-term scheduler) to select another process from the ready queue to run next. The storage structure used for the ready queue and the algorithm used to select the next process can vary from scheduler to scheduler.

Types of CPU Scheduler

i. Long term scheduling
ii. Medium term scheduling
iii. Short term scheduling


2.2 Process Management

Long Term Scheduling (Job Scheduler)

 Selects processes from the queue and loads them into memory for execution.
 Loads processes into memory for CPU scheduling.
 Needed only in the case of batch processing; absent in multi-user time-sharing systems.

Medium Term Scheduling (Swapper)

 Removes processes from memory (handling the swapped-out processes).
 A process is swapped out, and later swapped back in, by the medium-term scheduler.


2.2 Process Management

Short Term Scheduling (CPU Scheduler)

 CPU scheduler selects a process among the processes
that are ready to execute and allocates CPU to one of
them.

 It is the change of ready state to running state of the
process.

 Its main objective is to increase system performance.


2.2 Process Management

Scheduler Criteria

CPU Utilization : keep CPU busy 100% of time
Fairness: all processes get fair share of the CPU
Turnaround: minimize the time users must
wait for output
Throughput: maximize number of jobs per
hour


2.2 Process Management

Preemptive vs. Non-Preemptive Scheduling

• Basic: In preemptive scheduling, a running process can be interrupted and suspended by another process; in non-preemptive scheduling, once a process is given the CPU, it cannot be suspended until it finishes.
• Process: Preemptive – can be removed from the current CPU; non-preemptive – runs until completion.
• Response time: Preemptive – improved response time; non-preemptive – short processes may have to wait behind long processes.
• Environments: Preemptive – important for interactive processes; non-preemptive – suitable for batch processes.
• Cost: Preemptive – has an associated cost; non-preemptive – no associated cost.
• Examples: Preemptive – SRTF, LRTF, RR, etc.; non-preemptive – FCFS, SJF, etc.


2.2 Process Management

Types of Scheduling Algorithms


2.2 Process Management

Scheduling Criteria

a. Throughput: number of processes completed per unit time.
b. Turnaround time:
  The interval from the time of submission of a process to the time of completion.
  Waiting time of the process + execution time.
c. Waiting time: sum of the periods spent waiting in the ready queue.
d. Response time: time taken to start responding.


2.2 Process Management
> Scheduling Algorithms

First In First Out

 Non-preemptive.
 Handles jobs according to their arrival time – the earlier they arrive, the sooner they're served.
 Simple algorithm to implement – uses a FIFO queue.
 Good for batch systems; not so good for interactive ones.
 Unfortunately, FCFS can yield some very long average wait times, particularly if the first process to get there takes a long time.


2.2 Process Management
> Scheduling Algorithms

Example: First In First Out

Process  Arrival Time  Burst Time
A        0             12
B        0             4
C        0             3
D        0             5

Gantt chart: A (0–12), B (12–16), C (16–19), D (19–24)

Process  Arrival  Burst  Completion  Turnaround (CT-AT)  Waiting (TAT-BT)
A        0        12     12          12                  0
B        0        4      16          16                  12
C        0        3      19          19                  16
D        0        5      24          24                  19

Average Turnaround Time = (12 + 16 + 19 + 24) / 4 = 17.75
Average Waiting Time = (0 + 12 + 16 + 19) / 4 = 11.75
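
A short Python sketch (added for illustration) that reproduces the FCFS figures above for processes that all arrive at time 0:

```python
# Sketch of FCFS for the example above: processes run strictly in arrival
# order, so completion, turnaround (CT - AT) and waiting (TAT - BT) times
# follow directly.
processes = [("A", 0, 12), ("B", 0, 4), ("C", 0, 3), ("D", 0, 5)]  # (name, arrival, burst)

clock, rows = 0, []
for name, arrival, burst in processes:        # FCFS: arrival order
    clock = max(clock, arrival) + burst       # completion time of this process
    rows.append((name, clock, clock - arrival, clock - arrival - burst))

for name, ct, tat, wt in rows:
    print(f"{name}: CT={ct:2d}  TAT={tat:2d}  WT={wt:2d}")
print("average TAT =", sum(r[2] for r in rows) / len(rows))   # 17.75
print("average WT  =", sum(r[3] for r in rows) / len(rows))   # 11.75
```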


2.2 Process Management
> Scheduling Algorithms

Shortest Job First

 Non-preemptive.
 Reduces the number of waiting processes.
 Uses CPU-burst lengths to schedule the process with the shortest time first.
 Optimal – gives the minimum average waiting time for a given set of processes, but only when all jobs are available at the same time and the CPU estimates are available and accurate.
 Doesn't work well in interactive systems because users don't estimate in advance the CPU time required to run their jobs.
 Preemptive SJF is sometimes referred to as shortest-remaining-time-first scheduling.


2.2 Process Management
> Scheduling Algorithms

Example: Shortest Job First

Process  Arrival Time  Burst Time
A        0             12
B        0             4
C        0             3
D        0             5

Gantt chart: C (0–3), B (3–7), D (7–12), A (12–24)

Process  Arrival  Burst  Completion  Turnaround (CT-AT)  Waiting (TAT-BT)
A        0        12     24          24                  12
B        0        4      7           7                   3
C        0        3      3           3                   0
D        0        5      12          12                  7

Average Turnaround Time = (24 + 7 + 3 + 12) / 4 = 11.5
Average Waiting Time = (12 + 3 + 0 + 7) / 4 = 5.5
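
The same kind of sketch (added for illustration) for non-preemptive SJF; because every job in the example arrives at time 0, simply running jobs in order of burst time reproduces the figures above:

```python
# Sketch of non-preemptive SJF for the example above. Every job arrives at
# time 0, so running jobs in order of burst time gives the figures shown.
processes = [("A", 0, 12), ("B", 0, 4), ("C", 0, 3), ("D", 0, 5)]  # (name, arrival, burst)

clock, rows = 0, []
for name, arrival, burst in sorted(processes, key=lambda p: p[2]):  # shortest burst first
    clock += burst
    rows.append((name, clock, clock - arrival, clock - arrival - burst))

for name, ct, tat, wt in sorted(rows):        # print back in name order
    print(f"{name}: CT={ct:2d}  TAT={tat:2d}  WT={wt:2d}")
print("average TAT =", sum(r[2] for r in rows) / len(rows))   # 11.5
print("average WT  =", sum(r[3] for r in rows) / len(rows))   # 5.5
```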


2.2 Process Management
> Scheduling Algorithms

Priority

 Preemptive and non-preemptive.
 Gives preferential treatment to important jobs.
 Programs with the highest priority are processed first.
 They aren't interrupted until their CPU cycles are completed or a natural wait occurs.
 If two or more jobs with equal priority are in the READY queue, the processor is allocated to the one that arrived first (first come first served within priority).
 Problem >> Starvation – low priority processes may never execute.
 Solution >> Aging – as time progresses, increase the priority of the process.


2.2 Process Management
> Scheduling Algorithms

Example: Priority

Process  Arrival Time  Burst Time  Priority
A        0             12          3
B        0             4           1
C        0             3           4
D        0             5           2

Gantt chart: B (0–4), D (4–9), A (9–21), C (21–24)

Process  Arrival  Burst  Completion  Turnaround (CT-AT)  Waiting (TAT-BT)
A        0        12     21          21                  9
B        0        4      4           4                   0
C        0        3      24          24                  21
D        0        5      9           9                   4

Average Turnaround Time = (21 + 4 + 24 + 9) / 4 = 14.5
Average Waiting Time = (9 + 0 + 21 + 4) / 4 = 8.5
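
An illustrative sketch of non-preemptive priority scheduling for this example, assuming that a lower priority number means a more important process (as the Gantt chart above implies):

```python
# Sketch of non-preemptive priority scheduling for the example above,
# assuming a lower priority number means a more important process.
processes = [("A", 0, 12, 3), ("B", 0, 4, 1), ("C", 0, 3, 4), ("D", 0, 5, 2)]  # (name, AT, BT, priority)

clock, rows = 0, []
for name, arrival, burst, prio in sorted(processes, key=lambda p: p[3]):
    clock += burst                            # every process arrives at time 0
    rows.append((name, clock, clock - arrival, clock - arrival - burst))

for name, ct, tat, wt in sorted(rows):
    print(f"{name}: CT={ct:2d}  TAT={tat:2d}  WT={wt:2d}")
print("average TAT =", sum(r[2] for r in rows) / len(rows))   # 14.5
print("average WT  =", sum(r[3] for r in rows) / len(rows))   # 8.5
```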


2.2 Process Management
> Scheduling Algorithms

Round Robin

 Preemptive and based on FIFO.
 Processes run only for a limited amount of time called a time slice or quantum.
 Used extensively in interactive systems because it's easy to implement.
 Ensures the CPU is equally shared among all active processes and isn't monopolized by any one job.
 The time slice is called a time quantum; its size is crucial to system performance (100 ms to 1–2 s).


2.2 Process Management
> Scheduling Algorithms

Example: Round Robin

Quantum (time slice) = 4

Process  Arrival Time  Burst Time
A        0             12
B        0             4
C        0             3
D        0             5

Gantt chart: A (0–4), B (4–8), C (8–11), D (11–15), A (15–19), D (19–20), A (20–24)

Process  Arrival  Burst  Completion  Turnaround (CT-AT)  Waiting (TAT-BT)
A        0        12     24          24                  12
B        0        4      8           8                   4
C        0        3      11          11                  8
D        0        5      20          20                  15

Average Turnaround Time = (24 + 8 + 11 + 20) / 4 = 15.75
Average Waiting Time = (12 + 4 + 8 + 15) / 4 = 9.75
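
A small round robin sketch (added for illustration) with a quantum of 4 that reproduces the Gantt order and averages above:

```python
# Sketch of round robin with a time quantum of 4, reproducing the Gantt
# order A B C D A D A and the averages above. All processes arrive at time 0.
from collections import deque

QUANTUM = 4
burst = {"A": 12, "B": 4, "C": 3, "D": 5}
remaining = dict(burst)
ready = deque(burst)                           # FIFO ready queue: A, B, C, D

clock, completion, gantt = 0, {}, []
while ready:
    name = ready.popleft()
    run = min(QUANTUM, remaining[name])        # run for one quantum or until done
    clock += run
    remaining[name] -= run
    gantt.append(name)
    if remaining[name] == 0:
        completion[name] = clock
    else:
        ready.append(name)                     # not finished: back of the queue

print("Gantt order:", " ".join(gantt))         # A B C D A D A
for name in sorted(burst):
    tat = completion[name]                     # arrival time is 0 for every process
    print(f"{name}: CT={completion[name]:2d}  TAT={tat:2d}  WT={tat - burst[name]:2d}")
print("average TAT =", sum(completion.values()) / len(burst))                       # 15.75
print("average WT  =", sum(completion[n] - burst[n] for n in burst) / len(burst))   # 9.75
```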


2.2 Process Management
> Scheduling Algorithms

Multilevel Queue

 The ready queue is partitioned into separate queues, e.g.:
 foreground (interactive)
 background (batch)

 Each process is permanently assigned to a given queue.

 Each queue has its own scheduling algorithm:
 foreground – Round Robin
 background – FIFO


2.2 Process Management
> Scheduling Algorithms

Multilevel Feedback Queue

 Three queues:
 Q0 – RR with time quantum 8 milliseconds
 Q1 – RR time quantum 16 milliseconds
 Q2 – FCFS

 Scheduling
 A new job enters queue Q0 which is served FCFS
 When it gains CPU, job receives 8 milliseconds
 If it does not finish in 8 milliseconds, job is
moved to queue Q1

 At Q1 job is again served FCFS and receives 16
additional milliseconds
 If it still does not complete, it is preempted and
moved to queue Q2
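
An illustrative sketch of this three-queue scheme; the burst times are made up and every job is assumed to arrive at time 0:

```python
# Illustrative sketch of the three-queue multilevel feedback queue described
# above: Q0 (RR, 8 ms), Q1 (RR, 16 ms), Q2 (FCFS). Burst times are made up.
from collections import deque

quanta = [8, 16, None]                         # None = run to completion (FCFS)
queues = [deque(), deque(), deque()]
for name, burst in [("A", 5), ("B", 20), ("C", 40)]:
    queues[0].append((name, burst))            # every new job enters Q0

clock = 0
while any(queues):
    level = next(i for i, q in enumerate(queues) if q)    # highest non-empty queue
    name, remaining = queues[level].popleft()
    slice_ = remaining if quanta[level] is None else min(quanta[level], remaining)
    clock += slice_
    remaining -= slice_
    if remaining == 0:
        print(f"{name} finished at t={clock} ms (queue Q{level})")
    else:
        queues[level + 1].append((name, remaining))       # demoted to the next queue
```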


2.2 Process Management

Threads: Definition

 In programming, a thread is a part of a larger process or program.
 In an operating system, a thread is a basic unit of CPU utilization; it comprises a thread ID, a program counter, a register set, and a stack.
 It shares with other threads belonging to the same process its code section, data section, and other operating-system resources, such as open files and signals.

 A traditional (or heavyweight) process has a single
thread of control

 If a process has multiple threads of control, it can
perform more than one task at a time

 Most software applications that run on modern
computers are multithreaded
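
As a small illustration (not from the slides), the Python sketch below runs two threads inside one process; both update the same shared counter (the shared data section) while each has its own stack, and a lock protects the shared data:

```python
# Small sketch of a multithreaded process: two threads share the same
# data section (the "counter" variable) while each has its own stack.
import threading

counter = 0
lock = threading.Lock()

def worker(name, increments):
    global counter
    for _ in range(increments):
        with lock:                  # shared data must be protected
            counter += 1
    print(f"thread {name} done")

threads = [threading.Thread(target=worker, args=(f"T{i}", 100_000)) for i in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print("final counter:", counter)    # 200000: both threads updated the shared variable
```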


2.2 Process Management

Single Threads vs. Multiple Threads


