TE-IT: MEP Module 4 - ARM 7 Architecture

4.12.9 Arithmetic Shift Right by Immediate

Syntax Rm, ASR #(value)

Operation (op1) ← Rm Arithmetic_Shift_Right IR(value)
if IR(value) = 0
    ALU(C) ← Rm(31)
else
    ALU(C) ← Rm(IR(value) - 1)

Description This operand is used to provide the signed value of a register arithmetically shifted
right (divided by a constant power of two).

This instruction operand is the value of register Rm, arithmetically shifted right by
an immediate (value) in the range 1 to 32. The sign bit of Rm (bit 31) is inserted into
the vacated bit positions, thus maintaining the sign of the value. The carry flag is
the last bit shifted out.

Notes If the PC is specified as register Rm, the value used is the address of the next
instruction, that is to say the address of the current instruction plus 8.
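
The behaviour above can be modelled in a few lines of C. The sketch below is only an illustration of the operand semantics (the helper name and the use of uint32_t registers are assumptions, and an arithmetic shift of signed values is assumed, as on typical ARM toolchains):

    #include <stdint.h>

    /* Illustrative model of the "Rm, ASR #(value)" operand, value in 1..32
       (an encoding of 0 in the instruction represents ASR #32).            */
    static uint32_t asr_imm(uint32_t rm, unsigned value, unsigned *carry)
    {
        int32_t signed_rm = (int32_t)rm;         /* keep the sign bit while shifting */
        if (value == 32) {
            *carry = (rm >> 31) & 1u;            /* ALU(C) <- Rm(31)                 */
            return (uint32_t)(signed_rm >> 31);  /* every bit becomes the sign bit   */
        }
        *carry = (rm >> (value - 1)) & 1u;       /* ALU(C) <- Rm(value - 1)          */
        return (uint32_t)(signed_rm >> value);   /* divide by 2^value                */
    }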

4.12.10 Arithmetic Shift Right by Register

Syntax Rm, ASR Rs

Operation (op1) ← Rm Arithmetic_Shift_Right Rs(7:0)
if Rs(7:0) ≠ 0
    ALU(C) ← Rm(Rs(7:0) - 1)

Description This operand is used to provide the signed value of a register arithmetically shifted
right (divided by a variable power of two).

This instruction operand is the value of register Rm arithmetically shifted right by
the value in the least significant byte of register Rs. The sign bit of Rm (bit 31) is
inserted into the vacated bit positions. The carry flag is the last bit shifted out,
which is the sign bit of Rm if the shift amount is 32 or more, or unaffected if the
shift amount is zero.

Notes Specifying the PC as register Rm, or register Rs has UNPREDICTABLE results.
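
A matching C sketch for the register form, covering the three carry cases described above (shift of zero, shift below 32, shift of 32 or more); again the helper name and types are assumptions, not anything defined by ARM:

    #include <stdint.h>

    /* Illustrative model of "Rm, ASR Rs": only the least significant byte of Rs is used. */
    static uint32_t asr_reg(uint32_t rm, uint32_t rs, unsigned carry_in, unsigned *carry_out)
    {
        unsigned shift = rs & 0xFFu;             /* Rs(7:0)                              */
        int32_t signed_rm = (int32_t)rm;

        if (shift == 0) {                        /* operand unchanged, carry unaffected  */
            *carry_out = carry_in;
            return rm;
        }
        if (shift >= 32) {                       /* result and carry are the sign bit    */
            *carry_out = (rm >> 31) & 1u;
            return (uint32_t)(signed_rm >> 31);
        }
        *carry_out = (rm >> (shift - 1)) & 1u;   /* ALU(C) <- Rm(Rs(7:0) - 1)            */
        return (uint32_t)(signed_rm >> shift);
    }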

4.12.11 Rotate Right by Immediate

Syntax Rm, ROR #(value)

Operation (op1 ) ← Rm Rotate Right IR(value)
ALU(C) ← Rm(IR(value) - 1)

Description This operand is used to provide the value of a register rotated by a constant value.

This instruction operand is the value of register Rm rotated right by an immediate
(value) in the range 1 to 31. As bits are rotated off the right end, they are inserted


into the vacated bit positions on the left. The carry flag is the last bit rotated off the
right end.

Notes If the PC is specified as register Rm, the value used is the address of the next
instruction, that is to say the address of the current instruction plus 8.


4.12.12 Rotate Right by Register

Syntax Rm, ROR Rs

Operation (op1 ) ← Rm Rotate Right Rs(4:0)

if Rs(4:0) ≠ 0
ALU(C) ← Rm(Rs(4:0) - 1)

Description This operand is used to provide the value of a register rotated by a variable value.

This instruction operand is the value of register Rm rotated right by
the value in the least significant byte of register Rs. As bits are rotated off the right
end, they are inserted into the vacated bit positions on the left. The carry flag is
the last bit rotated off the right end, or unaffected if the shift amount is zero.

Notes Specifying the PC as register Rm, or register Rs has UNPREDICTABLE results.

4.12.13 Rotate Right with Extend

Syntax Rm, RRX

Operation (op1 ) ← (CPSR(C) Logical Shift Left 31) OR (Rm Logical Shift Right 1)
ALU(C) ← Rm(0)

Description This operand can be used to perform a 33-bit rotate right using the Carry Flag as
the 33rd bit.

This instruction operand is the value of register Rm shifted right by one bit, with
the Carry Flag replacing the vacated bit position. The carry flag is the bit shifted
off the right end.

Notes A rotate left with extend can be performed with an ADC instruction (A.2 on
page 136).

If the PC is specified as register Rm, the value used is the address of the next
instruction, that is to say the address of the current instruction plus 8.
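
The 33-bit rotate can be written directly from the operation line above; a small C sketch (hypothetical helper, with the carry flag passed in and out explicitly):

    #include <stdint.h>

    /* Illustrative model of "Rm, RRX": a one-bit rotate right through the carry flag. */
    static uint32_t rrx(uint32_t rm, unsigned carry_in, unsigned *carry_out)
    {
        uint32_t result = ((uint32_t)carry_in << 31) | (rm >> 1);  /* (C << 31) OR (Rm >> 1) */
        *carry_out = rm & 1u;                                      /* ALU(C) <- Rm(0)        */
        return result;
    }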

4.13 Memory Access

There are nine addressing modes used to calculate the address for a Load and Store Word or
Unsigned Byte instruction. The general instruction syntax is:

opcode(cc)(B)(H)(T) Rd, (op2)

where (op2) is one of the nine options listed in Table 5.2 below.

4.13.1 Immediate Offset

Syntax [Rn, #±(value)]

Operation (op2) ← Rn + IR(value)

Description This addressing mode calculates an address by adding or subtracting the value of
an immediate offset to or from the value of the base register Rn.


    Syntax                            Mode

1   [Rn, #±(value)]                   Immediate offset
2   [Rn, Rm]                          Register offset
3   [Rn, Rm, (shift) #(value)]        Scaled register offset
4   [Rn, #±(value)]!                  Immediate pre-indexed
5   [Rn, Rm]!                         Register pre-indexed
6   [Rn, Rm, (shift) #(value)]!       Scaled register pre-indexed
7   [Rn], #±(value)                   Immediate post-indexed
8   [Rn], Rm                          Register post-indexed
9   [Rn], Rm, (shift) #(value)        Scaled register post-indexed

Table 5.2: Memory Addressing Modes

Usage This addressing mode is useful for accessing structure (record) fields, and accessing
parameters and local variables in a stack frame. With an offset of zero, the address
produced is the unaltered value of the base register Rn.

Notes The syntax [Rn] is treated as an abbreviation for [Rn, #0].

If the PC is specified as register Rn, the value used is the address of the next
instruction, that is to say the address of the current instruction plus 8.

4.13.2 Register Offset

Syntax [Rn, Rm]

Operation (op2) ← Rn + Rm

Description This addressing mode calculates an address by adding or subtracting the value of
the index register Rm to or from the value of the base register Rn.

Usage This addressing mode is used for pointer plus offset arithmetic, and accessing a
single element of an array of bytes.

Notes If the PC is specified as register Rn, the value used is the address of the next
instruction, that is to say the address of the current instruction plus 8. Specifying
the PC as register Rm has UNPREDICTABLE results.

4.13.3 Scaled Register Offset

Syntax One of:

[Rn, Rm, LSL #(value)]

[Rn, Rm, LSR #(value)]

[Rn, Rm, ASR #(value)]

[Rn, Rm, ROR #(value)]

[Rn, Rm, RRX]

Operation LSL: index ← Rm Logical_Shift_Left IR(value)
LSR: index ← Rm Logical_Shift_Right IR(value)
ASR: index ← Rm Arithmetic_Shift_Right IR(value)
ROR: index ← Rm Rotate_Right IR(value)
RRX: index ← (CPSR(C) Logical_Shift_Left 31) OR (Rm Logical_Shift_Right 1)

(op2) ← Rn + index


Description These five addressing modes calculate an address by adding or subtracting the
shifted or rotated value of the index register Rm to or from the value of the base
register Rn.

Usage These addressing modes are used for accessing a single element of an array of values
larger than a byte.

Notes If the PC is specified as register Rn, the value used is the address of the next
instruction, that is to say the address of the current instruction plus 8. Specifying
the PC as register Rm has UNPREDICTABLE results.
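
Indexing an array of words is the classic use of this mode: the effective address is the base plus the index shifted left by two. The C sketch below models only the LSL case (the function name is invented; the other shift forms follow the operation lines above):

    #include <stdint.h>

    /* Illustrative model of the address produced by [Rn, Rm, LSL #(value)]. */
    static uint32_t scaled_register_offset(uint32_t rn, uint32_t rm, unsigned value)
    {
        uint32_t index = rm << value;   /* index <- Rm Logical_Shift_Left value */
        return rn + index;              /* (op2) <- Rn + index                  */
    }

    /* For example, loading element i of a word array at 'base' corresponds to
       LDR Rd, [Rn, Rm, LSL #2] with Rn = base and Rm = i, i.e.
       address = scaled_register_offset(base, i, 2).                           */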

4.13.4 Immediate Pre-indexed

Syntax [Rn, #±(value)]!

Operation (op2 ) ← Rn + IR(value)
(cc): Rn ← (op2 )

Description This addressing mode calculates an address by adding or subtracting the value of
an immediate (value) to or from the value of the base register Rn.
If the condition specified in the instruction ((cc)) matches the condition code status,
the calculated address is written back to the base register Rn.

Usage This addressing mode is used for pointer access to arrays with automatic update
of the pointer value.

Notes Specifying the PC as register Rn has UNPREDICTABLE results.

4.13.5 Register Pre-indexed

Syntax [Rn, Rm]!

Operation (op2) ← Rn + Rm
(cc): Rn ← (op2 )

Description This addressing mode calculates an address by adding or subtracting the value of
an index register Rm to or from the value of the base register Rn.
If the condition specified in the instruction ((cc)) matches the condition code status,
the calculated address is written back to the base register Rn.

Notes If the same register is specified for Rn and Rm, the result is UNPREDICTABLE.
Specifying the PC as register Rm has UNPREDICTABLE results.

4.13.6 Scaled Register Pre-indexed

Syntax One of:


[Rn, Rm, LSL #(value)]!
[Rn, Rm, LSR #(value)]!
[Rn, Rm, ASR #(value)]!
[Rn, Rm, ROR #(value)]!
[Rn, Rm, RRX]!

Operation LSL: index ← Rm Logical_Shift_Left IR(value)
LSR: index ← Rm Logical_Shift_Right IR(value)
ASR: index ← Rm Arithmetic_Shift_Right IR(value)
ROR: index ← Rm Rotate_Right IR(value)
RRX: index ← (CPSR(C) Logical_Shift_Left 31) OR (Rm Logical_Shift_Right 1)

(op2) ← Rn + index

(cc): Rn ← (op2)

Description These five addressing modes calculate an address by adding or subtracting the
shifted or rotated value of the index register Rm to or from the value of the base
register Rn.
If the condition specified in the instruction ((cc)) matches the condition code
status, the calculated address is written back to the base register Rn.

Notes If the same register is specified for Rn and Rm, the result is UNPREDICTABLE.
Specifying the PC as register Rm has UNPREDICTABLE results.

4.13.7 Immediate Post-indexed

Syntax [Rn], #±(value)

Operation (op2 ) ← Rn

(cc): Rn ← Rn + IR(value)

Description This addressing mode uses the value of the base register Rn as the address for the
memory access.

If the condition specified in the instruction ((cc)) matches the condition code
status, the value of the immediate offset is added to or subtracted from the value
of the base register Rn and written back to the base register Rn.

Usage This addressing mode is used for pointer access to arrays with automatic update
of the pointer value.

Notes Specifying the PC as register Rn has UNPREDICTABLE results.
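
The difference between pre-indexed (4.13.4) and post-indexed (4.13.7) addressing maps closely onto familiar C pointer idioms. The sketch below is only an analogy with invented names, stepping through a word array; it is not ARM-defined behaviour:

    #include <stdint.h>

    /* Pre-indexed  [Rn, #4]! : the offset is added first, the updated address is used
       for the access, and the new address is written back to the base register.      */
    /* Post-indexed [Rn], #4  : the original base address is used for the access, and
       the offset is added to the base register afterwards.                            */
    static void indexing_analogy(uint32_t words[4])
    {
        uint32_t *rn = &words[0];
        uint32_t pre  = *(rn += 1);   /* like LDR Rd, [Rn, #4]! : rn now points at words[1]          */
        uint32_t post = *rn++;        /* like LDR Rd, [Rn], #4  : reads words[1], then rn -> words[2] */
        (void)pre;
        (void)post;
    }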

4.13.8 Register Post-indexed

Syntax [Rn], Rm

Operation (op2 ) ← Rn

(cc): Rn ← Rn + Rm

Description This addressing mode uses the value of the base register Rn as the address for the
memory access.

If the condition specified in the instruction ((cc)) matches the condition code
status, the value of the index register Rm is added to or subtracted from the value
of the base register Rn and written back to the base register Rn.


Notes If the same register is specified for Rn and Rm, the result is UNPREDICTABLE.

Specifying the PC as register Rm has UNPREDICTABLE results.


4.13.9 Scaled Register Post-indexed

Syntax One of:

[Rn], Rm, LSL #(value)
[Rn], Rm, LSR #(value)
[Rn], Rm, ASR #(value)
[Rn], Rm, ROR #(value)
[Rn], Rm, RRX

Operation (op2) ← Rn

LSL: index ← Rm Logical_Shift_Left IR(value)
LSR: index ← Rm Logical_Shift_Right IR(value)
ASR: index ← Rm Arithmetic_Shift_Right IR(value)
ROR: index ← Rm Rotate_Right IR(value)
RRX: index ← (CPSR(C) Logical_Shift_Left 31) OR (Rm Logical_Shift_Right 1)

(cc): Rn ← Rn + index

Description This addressing mode uses the value of the base register Rn as the address for
the memory access.

If the condition specified in the instruction ((cc)) matches the condition code
status, the shifted or rotated value of index register Rm is added to or subtracted
from the value of the base register Rn and written back to the base register Rn.

Notes If the same register is specified for Rn and Rm, the result is UNPREDICTABLE.
Specifying the PC as register Rm has UNPREDICTABLE results.

4.14 The ARM Pipeline

ARM processors up to the ARM7 employ a simple 3-stage pipeline with the following pipeline stages:

- Fetch – the instruction is fetched from memory and placed in the instruction pipeline.
- Decode – the instruction is decoded and the data path control signals prepared for the next
  cycle. In this stage the instruction 'owns' the decode logic but not the data path.
- Execute – the instruction 'owns' the data path; the register bank is read, an operand shifted,
  the ALU result generated and written back into a destination register.

At any one time, three different instructions may occupy each of these stages, so the hardware in each
stage must be capable of independent operation.

4.14.1 3-stage pipeline ARM organization.

A pipeline is the mechanism a RISC processor uses to execute instructions. Using a pipeline speeds up
execution by fetching the next instruction while other instructions are being decoded and executed.

Figure 30- ARM7 Three-stage pipeline.
Figure 30 shows a three-stage pipeline:

1. Fetch loads an instruction from memory.

2. Decode identifies the instruction to be executed.

3. Execute processes the instruction and writes the result back to a register.

Figure 31- Pipelined instruction sequence.

Figure 31 illustrates the pipeline using a simple example.
1. It shows a sequence of three instructions being fetched, decoded, and executed by the
processor.
2. Each instruction takes a single cycle to complete after the pipeline is filled. The three
instructions are placed into the pipeline sequentially.
3. In the first cycle the core fetches the ADD instruction from memory.
4. In the second cycle the core fetches the SUB instruction and decodes the ADD instruction.
5. In the third cycle, both the SUB and ADD instructions are moved along the pipeline. The ADD
instruction is executed, the SUB instruction is decoded, and the CMP instruction is fetched.
6. This procedure is called filling the pipeline. The pipeline allows the core to execute an
instruction every cycle.
7. As the pipeline length increases, the amount of work done at each stage is reduced, which
allows the processor to attain a higher operating frequency. This in turn increases the
performance.
8. The system latency also increases because it takes more cycles to fill the pipeline before the
core can execute an instruction.
9. The increased pipeline length also means there can be data dependency between certain
stages.

4.14.2 Pipeline Executing Characteristics

Figure 32- ARM instruction sequence.

1. The ARM pipeline has not processed an instruction until it passes completely through the
   execute stage.
2. For example, an ARM7 pipeline (with three stages) has executed an instruction only when the
   fourth instruction is fetched. Figure 32 shows an instruction sequence on an ARM7 pipeline.

Figure 33- Example: pc = address + 8.

3. The MSR instruction is used to enable IRQ interrupts, which only occurs once the MSR
instruction completes the execute stage of the pipeline. It clears the I bit in the cpsr to enable
the IRQ interrupts.

4. Once the ADD instruction enters the execute stage of the pipeline, IRQ interrupts are enabled.
5. Figure 33 illustrates the use of the pipeline and the program counter pc.
6. In the execute stage, the pc always points to the address of the instruction plus 8 bytes. In
   other words, the pc always points to the address of the instruction being executed plus two
   instructions ahead.
7. This is important when the pc is used for calculating a relative offset and is an architectural
characteristic across all the pipelines.

Note: In Thumb state the pc is the instruction address plus 4.

8. The other important characteristics of the pipeline are:
i. The execution of a branch instruction or branching by the direct modification of the
pc causes the ARM core to flush its pipeline.
ii. ARM10 uses branch prediction, which reduces the effect of a pipeline flush by
predicting possible branches and loading the new branch address prior to the
execution of the instruction.
iii. An instruction in the execute stage will complete even though an interrupt has been
raised. Other instructions in the pipeline will be abandoned, and the processor will
start filling the pipeline from the appropriate entry in the vector table.

The simplest way to view breaks in the ARM pipeline is to observe that:

- All instructions occupy the data path for one or more adjacent cycles.
- For each cycle that an instruction occupies the data path, it occupies the decode logic in the
  immediately preceding cycle.
- During the first data path cycle each instruction issues a fetch for the next instruction but one.
- Branch instructions flush and refill the instruction pipeline.


4.14.3 PC behaviour

One consequence of the pipelined execution model used on the ARM is that the program counter,
which is visible to the user as r15, must run ahead of the current instruction. If, as noted above,
instructions fetch the next instruction but one during their first cycle, this suggests that the PC must
point eight bytes (two instructions) ahead of the current instruction.

This is, indeed, what happens, and the programmer who attempts to access the PC directly through
r15 must take account of the exposure of the pipeline here. However, for most normal purposes the
assembler or compiler handles all the details.

Even more complex behaviour is exposed if r15 is used later than the first cycle of an instruction, since
the instruction will itself have incremented the PC during its first cycle. Such use of the PC is not often
beneficial, so the ARM architecture definition specifies the result as 'unpredictable' and it should be
avoided, especially since later ARMs do not have the same behaviour in these cases.
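
One place where the pc-plus-8 rule shows up concretely is in forming a branch offset. The sketch below shows the usual ARM-state calculation (word-aligned addresses are assumed and the function name is illustrative):

    #include <stdint.h>

    /* In ARM state the pc seen by an instruction is its own address plus 8, so a
       B/BL offset is taken relative to (branch_address + 8) and stored in words. */
    static int32_t branch_offset_in_words(uint32_t branch_address, uint32_t target_address)
    {
        return (int32_t)(target_address - (branch_address + 8)) / 4;
    }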

5 Module 5 - Open Source RTOS

5.1 Basics of RTOS: Real time concepts

Real time is a level of responsiveness that a user senses as sufficiently immediate or that enables the
computer to keep up with some external process. Real time describes a human rather than machine
sense of time. It is the class of computer systems that interacts with the external world in a time
frame defined by the external world. It is the system in which the correctness of the computations not
only depends upon the logical correctness of the computation but also upon the time at which the
result is produced. If the timing constraints of the system are not met system failure is said to have
occurred.

A real time operating system (RTOS) is an operating system that guarantees a certain capability within
a specified time constraint. An OS is a system program that provides an interface between application
programs and the computer system (hardware). In real-time applications, the dependability that a certain
task will finish before a particular deadline is just as important as obtaining the correct results. Besides meeting
deadlines, an RTOS must also be able to respond predictably to unpredictable events and process multiple
events concurrently. A system application/computer/operating system operates in real time to the
degree that those of its actions which have time constraints are performed with acceptable timeliness.
A system is real time to the degree that it employs real-time resource management. The resources are
explicitly managed for the purpose of operating in real time. The system operating in real time needs
an appropriate balance of real time resource management & hardware resource capacity.

5.2 Hard Real time and Soft Real-time

Depending on the consequences of a task missing its deadline, real-time tasks are usually
distinguished in three categories:

5.2.1 Hard

 A real-time task is said to be hard if missing a single deadline may cause catastrophic
consequences on the system under control.

 A system where “something very bad” happens if the deadline is not met.

 Applications having hard real-time tasks are safety critical…. any failure leads to severe
consequences.

 Example: Control systems for aircraft, nuclear reactors, chemical power plants, Pacemaker
etc.

5.2.2 Firm

 A real-time task is said to be firm if missing its deadline does not cause any damage to the
system, but the output has no value.

 Unlike, a hard-real-time task, even when a firm real-time task does not complete within its
deadline, the system does not fail.

 The late results are merely discarded.

 The utility of the results computed by a firm real-time task becomes zero after deadline.

 Example: Video Conferencing, Satellite-Based Tracking of Enemy Movement.

5.2.3 Soft

 A real-time task is said to be soft if missing its deadline has still some utility for the system,
although causing performance degradation.

 A system where the performance is degraded below what is generally considered
acceptable if the deadline is missed

Example: multimedia system, railway reservation system

5.3 Differences between GPOS & RTOS

RTOS – Real Time Operating System vs GPOS – General Purpose Operating System:

• RTOS has unfair scheduling, i.e. scheduling is based on priority; GPOS has fair scheduling, i.e. it
  can be adjusted dynamically for optimized throughput.
• In an RTOS the kernel is pre-emptive, either completely or up to a maximum degree; in a GPOS the
  kernel is non pre-emptive or has long non pre-emptive code sections.
• In an RTOS priority inversion is a major issue; in a GPOS it usually remains unnoticed.
• An RTOS has a predictable behaviour; a GPOS has no such predictability.
• An RTOS works under worst case assumptions; a GPOS optimizes for the average case.
• An RTOS does not have a large memory; a GPOS has a large memory.

Kernel space vs user space or Real-Time space:

• User space has more protection against erroneous access to physical memory or I/O devices, but has
larger latencies.

• Real-time space is a part of kernel space and is used in a particular way.

Monolithic Kernel vs micro-Kernel

• A monolithic kernel has all OS services (including device drivers, network stacks, file systems,
etc.) running within the privileged mode of the processor.

• A micro-kernel, on the other hand, uses the privileged mode only for really core services (task
management and scheduling, inter-process communication, interrupt handling, and memory
management), and has most of the device drivers and OS services running as “normal” tasks.
It is difficult to crash.

• UNIX, Linux and Microsoft NT have monolithic kernels; QNX, FIASCO, VxWorks, and GNU/Hurd
have micro-kernels.


5.4 Basic architecture of an RTOS

5.5 Scheduling Systems

Multitasking involves the execution switching among the different tasks. There should be some
mechanism in place to share the CPU among the different tasks and to decide which process/task is
to be executed at a given point of time. Determining which task/process is to be executed at a given
point of time is known as task/process scheduling. Task scheduling forms the basis of multitasking.
Scheduling policies form the guidelines for determining which task is to be executed when. The
scheduling policies are implemented in an algorithm and it is run by the kernel as a service. The kernel
service/application, which implements the scheduling algorithm, is known as ‘Scheduler’.

Note:
1. A task that can potentially execute on the processor, independently of its actual availability,
   is called an active task.
2. A task waiting for the processor is called a ready task, whereas the task in execution is called
   a running task.
3. Ready tasks waiting for the processor are kept in a queue, called the ready queue.

Based on the scheduling algorithms used, the scheduling can be classified into the following
categories:

5.5.1 Non-pre-emptive Scheduling

In this scheduling type, the currently executing task/process is allowed to run until it terminates or
enters the ‘Wait’ state waiting for an I/O or system resource. The various types of non-pre-emptive
scheduling adopted in task/process scheduling are listed below.

5.5.1.1 First-Come-First-Served (FCFS)/ FIFO Scheduling
As the name indicates, the First-Come-First-Served (FCFS) scheduling algorithm allocates CPU time to
the processes based on the order in which they enter the ‘Ready’ queue. The first entered process is
serviced first. It is the same as any real-world application where queue systems are used, e.g. a ticketing
reservation system where people need to stand in a queue and the first person standing in the queue
is serviced first. FCFS scheduling is also known as First In First Out (FIFO) where the process which is
put first into the ‘Ready’ queue is serviced first.

The major drawback of FCFS algorithm is that it favours monopoly of process. A process, which does
not contain any I/O operation, continues its execution until it finishes its task. If the process contains
any I/O operation, the CPU is relinquished by the process. In general, FCFS favours CPU bound
processes, and I/O bound processes may have to wait until the completion of a CPU bound process, if
the currently executing process is a CPU bound process. This leads to poor device utilisation. The
average waiting time is not minimal for FCFS scheduling algorithm.

5.5.1.2 Last-Come-First Served (LCFS)/LIFO Scheduling
The Last-Come-First Served (LCFS) scheduling algorithm also allocates CPU time to the processes based
on the order in which they are entered in the ‘Ready’ queue. The last entered process is serviced first.
LCFS scheduling is also known as Last in First Out (LIFO) where the process, which is put last into the
‘Ready’ queue, is serviced first.

LCFS scheduling is not optimal and it also possesses the same drawback as that of FCFS algorithm.

5.5.1.3 Shortest job First (SJF) Scheduling
Shortest Job First (SJF) scheduling algorithm sorts the ‘Ready’ queue each time a process relinquishes
the CPU (either the process terminates or enters the ‘Wait’ state waiting for I/O or system resource)
to pick the process with shortest (least) estimated completion/run time. In SJF, the process with the
shortest estimated run time is scheduled first, followed by the next shortest process, and so on.

The average waiting time for a given set of processes is minimal in SJF scheduling and so it is optimal
compared to other non-pre-emptive scheduling like FCFS. The major drawback of SJF algorithm is that
a process whose estimated execution completion time is high may not get a chance to execute if more
and more processes with least estimated execution time enters the ‘Ready’ queue before the process
with longest estimated execution time started its execution (In non-pre-emptive SJF). This condition
is known as ‘Starvation’. Another drawback of SJF is that it is difficult to know in advance the next
shortest process in the ‘Ready’ queue for scheduling since new processes with different estimated
execution time keep entering the ‘Ready’ queue at any point of time.
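
A short C sketch contrasting FCFS with non-pre-emptive SJF for a handful of hypothetical burst times (all processes assumed to be in the ‘Ready’ queue at time zero; the numbers are made up purely for illustration):

    #include <stdio.h>

    #define N 4

    /* Average waiting time when processes (all ready at t = 0) run in the given order. */
    static double average_waiting_time(const int burst[], int n)
    {
        double total_wait = 0.0;
        int elapsed = 0;
        for (int i = 0; i < n; i++) {
            total_wait += elapsed;   /* process i waits for everything scheduled before it */
            elapsed += burst[i];
        }
        return total_wait / n;
    }

    int main(void)
    {
        int fcfs[N] = { 8, 4, 9, 5 };   /* order of arrival into the 'Ready' queue */
        int sjf[N]  = { 4, 5, 8, 9 };   /* the same bursts sorted shortest-first   */

        printf("FCFS average waiting time: %.2f\n", average_waiting_time(fcfs, N));
        printf("SJF  average waiting time: %.2f\n", average_waiting_time(sjf, N));
        return 0;
    }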

5.5.1.4 Priority Based Scheduling
The Turn Around Time (TAT) and waiting time for processes in non-pre-emptive scheduling varies with
the type of scheduling algorithm. Priority based non-pre-emptive scheduling algorithm ensures that a
process with high priority is serviced at the earliest compared to other low priority processes in the
‘Ready’ queue. The priority of a task/process can be indicated through various mechanisms. The non-
pre-emptive priority-based scheduler sorts the ‘Ready’ queue based on priority and picks the process
with the highest level of priority for execution.

Similar to SJF scheduling algorithm, non-pre-emptive priority-based algorithm also possesses the
drawback of ‘Starvation’ where a process whose priority is low may not get a chance to execute if
more and more processes with higher priorities enter the ‘Ready’ queue before the process with lower
priority started its execution. ‘Starvation’ can be effectively tackled in priority based non-pre-emptive
scheduling by dynamically raising the priority of the low priority task/process which is under starvation

(waiting in the ready queue for a longer time for getting the CPU time). The technique of gradually
raising the priority of processes which are waiting in the ‘Ready’ queue as time progresses, for
preventing ‘Starvation’, is known as ‘Aging’.

5.5.2 Pre-emptive Scheduling

In pre-emptive scheduling, every task in the ‘Ready’ queue gets a chance to execute. When and how
often each process gets a chance to execute (gets the CPU time) is dependent on the type of pre-
emptive scheduling algorithm used for scheduling the processes. In this kind of scheduling, the
scheduler can pre-empt (stop temporarily) the currently executing task/process and select another
task from the ‘Ready’ queue for execution. When to pre-empt a task and which task is to be picked up
from the ‘Ready’ queue for execution after pre-empting the current task is purely dependent on the
scheduling algorithm. A task which is pre-empted by the scheduler is moved to the ‘Ready’ queue. The
act of moving a ‘Running’ process/task into the ‘Ready’ queue by the scheduler, without the processes
requesting for it is known as ‘Pre-emption’. Pre-emptive scheduling can be implemented in different
approaches. The two important approaches adopted in pre-emptive scheduling are time-based pre-
emption and priority-based pre-emption. The various types of pre-emptive scheduling adopted in
task/process scheduling are explained below.

5.5.2.1 Pre-emptive SJF Scheduling/Shortest Remaining Time (SRT)
The non-pre-emptive SJF scheduling algorithm sorts the ‘Ready’ queue only after completing the
execution of the current process or when the process enters ‘Wait’ state, whereas the pre-emptive
SJF scheduling algorithm sorts the ‘Ready’ queue when a new process enters the ‘Ready’ queue and
checks whether the execution time of the new process is shorter than the remaining of the total
estimated time for the currently executing process. If the execution time of the new process is less,
the currently executing process is pre-empted and the new process is scheduled for execution. Thus,
pre-emptive SJF scheduling always compares the execution completion time (It is same as the
remaining time for the new process) of a new process entered the ‘Ready’ queue with the remaining
time for completion of the currently executing process and schedules the process with shortest
remaining time for execution. Pre-emptive SJF scheduling is also known as Shortest Remaining Time
(SRT) scheduling.

5.5.2.2 Round Robin (RR) Scheduling
In Round Robin scheduling, each process in the ‘Ready’ queue is executed for a predefined time slot.
The execution starts with picking up the first process in the ‘Ready’ queue. It is executed for a
predefined time and when the predefined time elapses or the process completes (before the pre-
defined time slice), the next process in the ‘Ready’ queue is selected for execution. This is repeated
for all the processes in the ‘Ready’ queue. Once each process in the ‘Ready’ queue is executed for the
predefined time period, the scheduler comes back and picks the first process in the ‘Ready’ queue
again for execution. The sequence is repeated. This reveals that the Round Robin scheduling is similar
to the FCFS scheduling and the only difference is that a time slice-based pre-emption is added to
switch the execution between the processes in the ‘Ready’ queue. The ‘Ready’ queue can be
considered as a circular queue in which the scheduler picks up the first process for execution and
moves to the next till the end of the queue and then comes back to the beginning of the queue to pick
up the first process.

RR scheduling involves a lot of overhead in maintaining the time slice information for every process
which is currently being executed.
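
A minimal Round Robin sketch for hypothetical burst times and a fixed time slice; it only tracks the remaining execution time of each process and prints the completion times, ignoring context switch overhead:

    #include <stdio.h>

    #define NTASK 3
    #define SLICE 4   /* predefined time slot */

    int main(void)
    {
        int remaining[NTASK] = { 10, 5, 8 };   /* hypothetical burst times */
        int time = 0, left = NTASK;

        while (left > 0) {
            for (int i = 0; i < NTASK; i++) {           /* circular 'Ready' queue */
                if (remaining[i] <= 0)
                    continue;
                int run = remaining[i] < SLICE ? remaining[i] : SLICE;
                time += run;
                remaining[i] -= run;
                if (remaining[i] == 0) {
                    printf("task %d finishes at t = %d\n", i, time);
                    left--;
                }
            }
        }
        return 0;
    }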


5.5.2.3 Priority Based Scheduling
Priority based pre-emptive scheduling algorithm is the same as that of the non-pre-emptive priority-based
scheduling except for the switching of execution between tasks. In pre-emptive scheduling, any high
priority process entering the ‘Ready’ queue is immediately scheduled for execution whereas in the
non-pre-emptive scheduling any high priority process entering the ‘Ready’ queue is scheduled only
after the currently executing process completes its execution or only when it voluntarily relinquishes
the CPU. The priority of a task/process in pre-emptive scheduling is indicated in the same way as that
of the mechanism adopted for non-pre-emptive multitasking.

5.6 Performance Metrics in scheduling models

The selection of a scheduling criterion/algorithm should consider the following factors:

CPU Utilisation: The scheduling algorithm should always make the CPU utilisation high. CPU utilisation
is a direct measure of how much percentage of the CPU is being utilised.

Throughput: This gives an indication of the number of processes executed per unit of time. The
throughput for a good scheduler should always be higher.

Turnaround Time: It is the amount of time taken by a process for completing its execution. It includes
the time spent by the process for waiting for the main memory, time spent in the ready queue, time
spent on completing the I/O operations, and the time spent in execution. The turnaround time should
be minimal for a good scheduling algorithm.

Waiting Time: It is the amount of time spent by a process in the ‘Ready’ queue waiting to get the CPU
time for execution. The waiting time should be minimal for a good scheduling algorithm.

Response Time: It is the time elapsed between the submission of a process and the first response. For
a good scheduling algorithm, the response time should be as low as possible.

To summarise, a good scheduling algorithm has high CPU utilization, minimum Turn Around Time
(TAT), maximum throughput and least response time.

5.7 Priority Inversion

Q. Explain briefly the problem of priority inversion and the mechanism to prevent the same.

5.7.1 Problem of priority inversion:

1. Priority inversion is the condition in which a high priority task needs to wait for a low priority
task to release a resource that is shared between the high priority task and the low priority task.

2. Priority based pre-emptive scheduling allows a higher priority task to execute first whereas
lock-based process synchronization using a mutex or semaphore allows the resource to be
used only by one task. These two concepts lead to the problem of priority inversion.

3. If a high priority task is being executed and it requires a shared resource which is currently
held by a low priority task, the kernel does not allow the high priority task to execute, and the
low priority task keeps control of the CPU. The priority of the high priority task is effectively
inverted. The problem becomes severe if some medium priority task interrupts the low
priority task.


Figure 34 explains the problem of priority inversion in detail. Consider 3 tasks: Task 1 with the highest
priority, Task 2 with medium priority and Task 3 with the lowest priority. From Figure 34:

1. Task 1 and Task 2 are both waiting for an event to occur and Task 3 is executing.

2. At some point, Task 3 acquires a semaphore, which the task needs before it can access a
shared resource.

3. Task 3 performs some operations on the acquired resource.

4. The event for which Task 1 was waiting occurs, and thus the kernel suspends Task 3 and starts
executing Task 1 because Task 1 has a higher priority.

Figure 34- Priority Inversion Explanation.

5. Task 1 continues execution

6. Task 1 executes for a while until it also wants to access the resource (i.e., it attempts to get
the semaphore that Task 3 owns). Because Task 3 owns the resource, Task 1 is placed in a list
of tasks waiting for the kernel to free the semaphore.

7. Task 3 resumes execution since the CPU control is now transferred from task1 to task3.

8. Task 3 resumes and continues execution until it is pre-empted by Task 2 because the event
for which Task 2 was waiting occurred.

9. Task 2 continues execution

10. Task 2 handles the event for which it was waiting, and, when it’s done, the kernel relinquishes
the CPU back to Task 3.

11. Task 3 continues execution

12. Task 3 finishes working with the resource and releases the semaphore. At this point, the kernel
knows that a higher priority task is waiting for the semaphore and performs a context switch
to resume Task 1.

13. At this point, Task 1 has the semaphore and can access the shared resource.

The priority of Task 1 has been virtually reduced to that of Task 3 because Task 1 was waiting for the
resource that Task 3 owned. The situation was aggravated when Task 2 pre-empted Task 3, which
further delayed the execution of Task 1.

5.7.2 Solving the problem of priority inversion:

1. In the above example since task 3 was stopped from executing by task2 at step 8 due to its
lower priority, priority inversion was observed.

2. The problem can be solved by increasing the priority of task3 to be equal to that of task1 from the
moment task1 starts waiting for the semaphore, i.e. at step 4.

3. Task 3 can now execute without any interruption from task2 because of its higher priority.

4. On completing execution, task3 releases the semaphore which is then acquired by task1. Thus,
task1 continues its execution. This workaround for solving the priority inversion problem is
known as priority inheritance.

Thus, priority inheritance is a mechanism where a low-priority task that is currently accessing a shared
resource requested by a high-priority task temporarily inherits the priority of that high-priority task,
from the moment the high-priority task raises the request.

Note: Think this scenario is farfetched and unlikely to cause issues in the real world? Think again. If
you do a quick search on the Mars Pathfinder, you'll discover how priority inversion nearly doomed
the mission.
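
On a POSIX system the same idea is exposed through the mutex protocol attribute. The sketch below (error handling omitted, names invented) asks for priority inheritance on the mutex that guards the shared resource, assuming the platform supports the POSIX priority-inheritance option:

    #include <pthread.h>

    /* Create a mutex whose owner temporarily inherits the priority of any
       higher-priority thread that blocks on it (priority inheritance).     */
    static pthread_mutex_t resource_lock;

    static void init_resource_lock(void)
    {
        pthread_mutexattr_t attr;

        pthread_mutexattr_init(&attr);
        pthread_mutexattr_setprotocol(&attr, PTHREAD_PRIO_INHERIT);
        pthread_mutex_init(&resource_lock, &attr);
        pthread_mutexattr_destroy(&attr);
    }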

5.8 Inter-Process Communication

The mechanism through which processes/tasks communicate with each other is known as Inter
Process/Task Communication (IPC). Inter Process Communication is essential for process
coordination. The various types of Inter Process Communication (IPC) mechanisms adopted by processes
are kernel (Operating System) dependent. Some of the important IPC mechanisms adopted by various
kernels are explained below.

5.8.1 Shared Memory

Processes share some area of the memory to communicate among them (Figure 35). Information to
be communicated by the process is written to the shared memory area. Other processes which require
this information can read the same from the shared memory area.


Figure 35- Concept of Shared Memory

The implementation of shared memory concept is kernel dependent. Different mechanisms are
adopted by different kernels for implementing this. A few among them are:

5.8.1.1 Pipes
‘Pipe’ is a section of the shared memory used by processes for communicating. Pipes follow the
client-server architecture. A process which creates a pipe is known as a pipe server and a process
which connects to a pipe is known as pipe client. A pipe can be considered as a conduit for information
flow and has two conceptual ends. It can be unidirectional, allowing information flow in one direction
or bidirectional allowing bi-directional information flow. A unidirectional pipe allows the process
connecting at one end of the pipe to write to the pipe and the process connected at the other end of
the pipe to read the data, whereas a bi-directional pipe allows both reading and writing at one end.
The unidirectional pipe can be visualised as Figure 36.

Figure 36- Concept of Pipe for IPC
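
A minimal sketch of a unidirectional POSIX pipe between a parent and a child process (an anonymous pipe is shown here; named pipes follow the server/client model described above; error handling is omitted):

    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
        int fd[2];
        char buf[32];

        pipe(fd);                  /* fd[0]: read end, fd[1]: write end */
        if (fork() == 0) {         /* child reads from one end          */
            read(fd[0], buf, sizeof(buf));
            printf("child received: %s\n", buf);
            _exit(0);
        }
        write(fd[1], "hello", 6);  /* parent writes at the other end    */
        return 0;
    }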

5.8.1.2 Memory Mapped Objects
Memory mapped object is a shared memory technique adopted by certain Real-Time Operating
Systems for allocating a shared block of memory which can be accessed by multiple process
simultaneously (of course certain synchronisation techniques should be applied to prevent
inconsistent results). In this approach a mapping object is created and physical storage for it is
reserved and committed. A process can map the entire committed physical area or a block of it to its
Virtual address space. All read and write operation to this Virtual address space by a process is directed
to its committed physical area. Any process which wants to share data with other processes can map
the physical memory area of the mapped object to its Virtual memory space and use it for sharing the
data.
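
A sketch of the memory mapped object idea using the POSIX shared memory calls; the object name and size are arbitrary, and error handling and synchronisation between the processes are omitted:

    #include <fcntl.h>
    #include <string.h>
    #include <sys/mman.h>
    #include <unistd.h>

    int main(void)
    {
        /* Create (or open) a named mapping object and commit physical storage for it. */
        int fd = shm_open("/demo_shared", O_CREAT | O_RDWR, 0600);
        ftruncate(fd, 4096);

        /* Map the committed area into this process's virtual address space. */
        char *shared = mmap(NULL, 4096, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);

        strcpy(shared, "data visible to every process that maps /demo_shared");

        munmap(shared, 4096);
        close(fd);
        shm_unlink("/demo_shared");
        return 0;
    }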

5.8.2 Message Passing

Message passing is an (a)synchronous information exchange mechanism used for Inter
Process/Thread Communication. The major difference between shared memory and message passing
technique is that, through shared memory lots of data can be shared whereas only limited amount of
info/data is passed through message passing. Also, message passing is relatively fast and free from the

synchronisation overheads compared to shared memory. Based on the message passing operation
between the processes, message passing is classified into the following types.

5.8.2.1 Message Queue
Usually the process which wants to talk to another process posts the message to a First-ln-First-Out
(FIFO) queue called ‘Message queue’, which stores the messages temporarily in a system defined
memory object, to pass it to the desired process (Figure 37). Messages are sent and received through
send (Name of the process to which the message is to be sent, message) and receive (Name of the
process from which the message is to be received, message) methods. The messages are exchanged
through a message queue. The implementation of the message queue, send and receive methods are
OS kernel dependent.

Figure 37- Concept of message queue based indirect messaging for IPC.
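
As a concrete example, the POSIX message queue API follows the send/receive pattern described above. A minimal sketch with an arbitrarily chosen queue name and sizes, error handling omitted:

    #include <fcntl.h>
    #include <mqueue.h>
    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
        struct mq_attr attr = { .mq_maxmsg = 8, .mq_msgsize = 64 };
        char buf[64];

        mqd_t q = mq_open("/demo_queue", O_CREAT | O_RDWR, 0600, &attr);

        mq_send(q, "sensor ready", strlen("sensor ready") + 1, 0);  /* post a message */
        mq_receive(q, buf, sizeof(buf), NULL);                      /* FIFO retrieval */
        printf("received: %s\n", buf);

        mq_close(q);
        mq_unlink("/demo_queue");
        return 0;
    }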

5.8.2.2 Mailbox
Mailbox is an alternate form of ‘Message queues’ and it is used in certain Real-Time Operating Systems
for IPC. Mailbox technique for IPC in RTOS is usually used for one-way messaging. The task/thread
which wants to send a message to other tasks/threads creates a mailbox for posting the messages.
The threads which are interested in receiving the messages posted to the mailbox by the mailbox
creator thread can subscribe to the mailbox. The thread which creates the mailbox is known as
‘mailbox server’ and the threads which subscribe to the mailbox are known as ‘mailbox clients’.


Figure 38- Concept of Mailbox based indirect messaging for IPC

The mailbox server posts messages to the mailbox and notifies it to the clients which are subscribed
to the mailbox. The clients read the message from the mailbox on receiving the notification. The
mailbox creation, subscription, message reading and writing are achieved through OS kernel provided
API calls. Mailbox and message queues are the same in functionality. The only difference is in the number
of messages supported by them. Both of them are used for passing data in the form of message(s)
from a task to another task(s). Mailbox is used for exchanging a single message between two tasks or
between an Interrupt Service Routine (ISR) and a task. Mailbox associates a pointer pointing to the
mailbox and a wait list to hold the tasks waiting for a message to appear in the mailbox. The
implementation of mailbox is OS kernel dependent.

5.8.2.3 Signalling
Signalling is a primitive way of communication between processes/threads. Signals are used for
asynchronous notifications where one process/thread fires a signal, indicating the occurrence of a
scenario for which the other process(es)/thread(s) is waiting. Signals are not queued and they do not carry
any data.
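
A small sketch of signal-based notification using the standard POSIX calls (the process signals itself here only to keep the example self-contained; another process would call kill() with the target pid):

    #include <signal.h>
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>

    static volatile sig_atomic_t event_seen = 0;

    static void on_event(int signo)
    {
        (void)signo;
        event_seen = 1;                   /* only note that the event occurred */
    }

    int main(void)
    {
        struct sigaction sa;
        memset(&sa, 0, sizeof(sa));
        sa.sa_handler = on_event;
        sigaction(SIGUSR1, &sa, NULL);    /* register interest in the notification     */

        kill(getpid(), SIGUSR1);          /* fire the signal (no data travels with it) */
        while (!event_seen)
            pause();

        printf("notified\n");
        return 0;
    }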

5.8.3 Remote Procedure Call (RPC) and Sockets

Remote Procedure Call or RPC (Figure 39) is the Inter Process Communication (IPC) mechanism used
by a process to call a procedure of another process running on the same CPU or on a different CPU
which is interconnected in a network. In the object-oriented language terminology RPC is also known
as Remote Invocation or Remote Method Invocation (RMI). RPC is mainly used for distributed
applications like client-server applications. With RPC it is possible to communicate over a

heterogeneous network (i.e. a network where client and server applications are running on different
operating systems). The CPU/process containing the procedure which needs to be invoked remotely
is known as the server. The CPU/process which initiates an RPC request is known as the client.

Figure 39- Concept of Remote Procedure Call (RPC) for IPC

Sockets are used for RPC communication. Socket is a logical endpoint in a two-way communication
link between two applications running on a network. A port number is associated with a socket so that
the network layer of the communication channel can deliver the data to the designated application.
Sockets are of different types, namely, Internet sockets (INET), UNIX sockets, etc. The INET socket
works on internet communication protocol. TCP/IP, UDP, etc. are the communication protocols used
by INET sockets. INET sockets are classified into:

1. Stream sockets
2. Datagram sockets

Stream sockets are connection oriented and they use TCP to establish a reliable connection. On the
other hand, Datagram sockets rely on UDP for establishing a connection. The UDP connection is
unreliable when compared to TCP.
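
A sketch showing how the two INET socket types mentioned above are created with the standard socket() call; no connection set-up or data transfer is shown:

    #include <stdio.h>
    #include <sys/socket.h>
    #include <unistd.h>

    int main(void)
    {
        /* Stream socket: connection oriented, reliable, carried over TCP. */
        int tcp_fd = socket(AF_INET, SOCK_STREAM, 0);

        /* Datagram socket: connectionless, unreliable, carried over UDP.  */
        int udp_fd = socket(AF_INET, SOCK_DGRAM, 0);

        printf("stream fd = %d, datagram fd = %d\n", tcp_fd, udp_fd);

        close(tcp_fd);
        close(udp_fd);
        return 0;
    }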

5.9 Interrupt management in RTOS environment

Interrupt handling deals with the handling of various types of interrupts. Interrupts provide Real-Time
behaviour to systems. Interrupts inform the processor that an external device or an associated task
requires immediate attention of the CPU. Interrupts can be either Synchronous or Asynchronous.

Interrupts which occur in sync with the currently executing task are known as Synchronous interrupts.
Usually the software interrupts fall under the Synchronous Interrupt category. Divide by zero,

memory segmentation error, etc. are examples of synchronous interrupts. For synchronous interrupts,
the interrupt handler runs in the same context as the interrupting task.

Asynchronous interrupts are interrupts which occur at any point of execution of any task, and are
not in sync with the currently executing task. The interrupts generated by external devices (by
asserting the interrupt line of the processor/controller to which the interrupt line of the device is
connected) connected to the processor/controller, timer over-flow interrupts, serial data reception/
transmission interrupts, etc. are examples of asynchronous interrupts. For asynchronous interrupts,
the interrupt handler is usually written as a separate task (depending on the OS kernel implementation) and
it runs in a different context. Hence, a context switch happens while handling the asynchronous
interrupts.

Priority levels can be assigned to the interrupts and each interrupt can be enabled or disabled
individually. Most of the RTOS kernel implements ‘Nested Interrupts’ architecture. Interrupt nesting
allows the pre-emption (interruption) of an Interrupt Service Routine (ISR), servicing an interrupt, by
a high priority interrupt.

5.9.1 Interrupt Service Routine (ISR)

Interrupt is a hardware signal that informs the CPU that an important event has occurred. When
interrupt occurs, CPU saves its context (contents of the registers) and jumps to the ISR. After ISR
processes the event, the CPU returns to the interrupted task in a non-pre-emptive kernel. In the case
of pre-emptive kernel, highest priority task gets executed.

In real-time operating systems, the interrupt latency, interrupt response time and the interrupt
recovery time are very important.

Interrupt Latency: The maximum time for which interrupts are disabled + time to start the execution
of the first instruction in the ISR is called interrupt latency.

Interrupt Response Time: Time between receipt of interrupt signal and starting the code that handles
the interrupt is called interrupt response time. In a pre-emptive kernel, response time = interrupt
latency + time to save CPU registers context.

Interrupt Recovery Time: Time required for CPU to return to the interrupted code/highest priority
task is called interrupt recovery time.

In a non-preemptive kernel, interrupt recovery time = time to restore the CPU context + time to execute
the return from interrupt instruction.

In a preemptive kernel, interrupt recovery time = time to check whether a high priority task is ready +
time to restore the CPU context of the highest priority task + time to execute the return from interrupt
instruction.


Figure 40- Interrupt Latency, Interrupt Response Time and Interrupt Recovery Time

5.10 Memory Management

Compared to the General-Purpose Operating Systems, the memory
management function of an RTOS kernel is slightly different. In general, the memory allocation time
increases depending on the size of the block of memory that needs to be allocated and the state of the
allocated memory block (initialised memory block consumes more allocation time than un-initialised
memory block). Since predictable timing and deterministic behaviour are the primary focus of an
RTOS, RTOS achieves this by compromising the effectiveness of memory allocation. RTOS makes use
of ‘block’ based memory allocation technique, instead of the usual dynamic memory allocation
techniques used by the GPOS. RTOS kernel uses blocks of fixed size of dynamic memory and the block
is allocated for a task on a need basis. The blocks are stored in a ‘Free Buffer Queue’. To achieve
predictable timing and avoid the timing overheads, most of the RTOS kernels allow tasks to access any
of the memory blocks without any memory protection. RTOS kernels assume that the whole design is
proven correct and protection is unnecessary. Some commercial RTOS kernels allow memory
protection as optional and the kernel enters a fail-safe mode when an illegal memory access occurs.

A few RTOS kernels implement the Virtual Memory² concept for memory allocation if the system supports
secondary memory storage (like HDD and FLASH memory). In the ‘block’ based memory allocation, a
block of fixed memory is always allocated for tasks on a need basis and it is taken as a unit. Hence, there
will not be any memory fragmentation issues. The memory allocation can be implemented as constant
functions and thereby it consumes a fixed amount of time for memory allocation. This leaves the
deterministic behaviour of the RTOS kernel untouched. The ‘block’ memory concept avoids the
garbage collection overhead also. The ‘block’ based memory allocation achieves deterministic
behaviour with the trade-off of a limited choice of memory chunk sizes and suboptimal memory usage.

² Virtual Memory is an imaginary memory supported by certain operating systems. Virtual memory expands the
address space available to a task beyond the actual physical memory (RAM) supported by the system. Virtual
memory is implemented with the help of a Memory Management Unit (MMU) and ‘memory paging’. The
program memory for a task can be viewed as different pages and the page corresponding to a piece of code that
needs to be executed is loaded into the main physical memory (RAM). When a memory page is no longer
required, it is moved out to secondary storage memory and another page which contains the code snippet to be
executed is loaded into the main memory. This memory movement technique is known as demand paging. The
MMU handles the demand paging and converts the virtual address of a location in a page to the corresponding
physical address in the RAM.
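
A minimal sketch of the ‘block’ based idea: a pool of fixed-size blocks kept on a free list, so that allocation and release are constant-time operations. The sizes and names below are illustrative and not taken from any particular RTOS kernel:

    #include <stddef.h>

    #define BLOCK_SIZE   64                     /* fixed chunk size chosen at design time */
    #define BLOCK_COUNT  16

    typedef union block {
        union block  *next;                     /* link used while the block is free      */
        unsigned char payload[BLOCK_SIZE];      /* storage handed out to a task           */
    } block_t;

    static block_t  pool[BLOCK_COUNT];          /* statically reserved memory             */
    static block_t *free_list;                  /* the 'Free Buffer Queue'                */

    static void pool_init(void)
    {
        for (int i = 0; i < BLOCK_COUNT; i++) { /* chain every block onto the free list   */
            pool[i].next = free_list;
            free_list = &pool[i];
        }
    }

    static void *block_alloc(void)              /* O(1): pop the head of the free list    */
    {
        block_t *b = free_list;
        if (b != NULL)
            free_list = b->next;
        return b;
    }

    static void block_free(void *p)             /* O(1): push the block back              */
    {
        block_t *b = p;
        b->next = free_list;
        free_list = b;
    }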

5.11 File Systems

File is a collection of related information. A file could be a program (source code or executable), text
files, image files, word documents, audio/video files, etc. Each of these files differ in the kind of
information they hold and the way in which the information is stored. The file operation is a useful
service provided by the OS. The file system management service of Kernel is responsible for

• The creation, deletion and alteration of files
• Creation, deletion and alteration of directories
• Saving of files in the secondary storage memory (e. g. Hard disk storage)
• Providing automatic allocation of file space based on the amount of free space available.
• Providing a flexible naming convention for the files

The various file system management operations are OS dependent. For example, the kernel of
Microsoft® DOS OS supports a specific set of file system management operations and they are not the
same as the file system operations supported by UNIX Kernel.

5.12 I/O Systems

Kernel is responsible for routing the I/O requests coming from different user applications to the
appropriate I/O devices of the system. In a well-structured OS, the direct accessing of I/O devices is
not allowed and the access to them is provided through a set of Application Programming Interfaces
(APIs) exposed by the kernel. The kernel maintains a list of all the I/O devices of the system. This list
may be available in advance, at the time of building the kernel. Some kernels dynamically update
the list of available devices as and when a new device is installed (e.g. Windows XP kernel keeps the
list updated when a new plug ‘n’ play USB device is attached to the system). The service ‘Device
Manager’ (Name may vary across different OS kernels) of the kernel is responsible for handling all I/O
device related operations. The kernel talks to the I/O device through a set of low-level system calls,
which are implemented in a service called device drivers. The device drivers are specific to a device
or a class of devices. The Device Manager is responsible for

• Loading and unloading of device drivers
• Exchanging information and the system specific control signals to and from the device

5.13 Advantages and Disadvantages of RTOS

5.13.1 Advantages:

1. Maximum use of devices and system thus gives more output from all the resources
2. Time given for shifting tasks is very less
3. It focusses on running applications and gives less importance to applications waiting in the queue
4. Size of programs are small

5. Error free
6. Memory allocation is well managed

5.13.2 Disadvantages:

1. Only a limited number of tasks can run at the same time
2. Sometimes the system resources are not good enough and they are costly as well
3. The algorithms used are complex and difficult to write
4. It requires specific device drivers
5. They are less prone to switching between tasks

5.14 Portable Operating System Interface (POSIX)

POSIX is a standard developed by IEEE. Before POSIX was standardized, every OS vendor used to provide
its own proprietary Application Programming Interface (API) for application development. This interface is
a set of function calls to access the operating system objects and services. If you developed the
application using the API supplied by one vendor, it was not possible to port the application to another
operating system. POSIX standard addressed this problem and the API was standardized. IEEE POSIX
1003.1c—2001 standard specifies the API for portable operating system interface. IEEE POSIX 1003.13
“Standardized Application Environment Profile—POSIX Real-time Application Support" Addresses the
API for real-time embedded systems. This standard gives the various C language function calls and
library functions that need to be implemented by the Operating System (OS) vendors. The concept of
threads became popular only because of this standard in fact, threads are referred as POSIX threads.
As operating systems can be used for wide range of applications from tiny systems to very large multi-
processor-based systems, different profiles are defined in the POSIX standard. Small System POSIX
Profile is for small embedded systems that have a single process with multiple threads.
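
As a minimal sketch of this standardized API, the POSIX threads (pthread) calls below look the same on any POSIX-compliant OS; the printed message is, of course, just an example.

    #include <pthread.h>
    #include <stdio.h>

    /* Thread body: the signature void *(*)(void *) is fixed by the POSIX standard. */
    static void *worker(void *arg) {
        (void)arg;
        printf("hello from a POSIX thread\n");
        return NULL;
    }

    int main(void) {
        pthread_t tid;
        if (pthread_create(&tid, NULL, worker, NULL) != 0) {
            return 1;                  /* thread creation failed */
        }
        pthread_join(tid, NULL);       /* wait for the thread to finish */
        return 0;
    }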

5.15 RTOS issues – selecting a Real Time Operating System

The decision of choosing an RTOS for an embedded design is very crucial. A lot of factors need to be analysed carefully before making a decision on the selection of an RTOS. These factors can be either functional or non-functional. The following section gives a brief introduction to the important functional and non-functional requirements that need to be analysed in the selection of an RTOS for an embedded design.

5.15.1 Functional Requirements

Processor Support: Not all RTOSs support all kinds of processor architectures. It is essential to ensure that the RTOS under consideration supports the processor used in the design.

Memory Requirements: The OS requires ROM memory for holding the OS files and it is normally
stored in a non-volatile memory like FLASH. OS also requires working memory RAM for loading the OS
services. Since embedded systems are memory constrained, it is essential to evaluate the minimal
ROM and RAM requirements for the OS under consideration.

Real-time Capabilities: It is not mandatory that the operating system for every embedded system needs to be real-time, and not all embedded operating systems are 'Real-time' in behaviour. The task/process scheduling policies play an important role in the 'Real-time' behaviour of an OS. Analyse the real-time capabilities of the OS under consideration and the standards met by the operating system for real-time capabilities.

Kernel and Interrupt Latency: The kernel of the OS may disable interrupts while executing certain services, and this may lead to interrupt latency. For an embedded system whose response requirements are high, this latency should be minimal.

Inter Process Communication and Task Synchronisation: The implementation of Inter Process
Communication and Synchronisation is OS kernel dependent. Certain kernels may provide a bunch of
options whereas others provide very limited options. Certain kernels implement policies for avoiding
priority inversion issues in resource sharing.
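
One such policy, priority inheritance, can be requested through the POSIX mutex attributes, as in the hedged sketch below; it works only on kernels that implement the _POSIX_THREAD_PRIO_INHERIT option, and the mutex name is an assumption.

    #include <pthread.h>

    pthread_mutex_t shared_lock;           /* protects a shared resource */

    int init_shared_lock(void) {
        pthread_mutexattr_t attr;
        pthread_mutexattr_init(&attr);
        /* Ask the kernel to temporarily boost a low-priority task holding this mutex
           to the priority of the highest-priority task blocked on it. */
        pthread_mutexattr_setprotocol(&attr, PTHREAD_PRIO_INHERIT);
        int rc = pthread_mutex_init(&shared_lock, &attr);
        pthread_mutexattr_destroy(&attr);
        return rc;                         /* 0 on success */
    }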

Modularisation Support: Most operating systems provide a bunch of features, and at times not all of them may be necessary for the functioning of an embedded product. It is very useful if the OS supports modularisation, wherein the developer can choose the essential modules and re-compile the OS image for the required functionality. Windows CE is an example of a highly modular operating system.

Support for Networking and Communication: The OS kernel may provide stack implementation and
driver support for a bunch of communication interfaces and networking. Ensure that the OS under
consideration provides support for all the interfaces required by the embedded product.

Development Language Support: Certain operating systems include the run-time libraries required for running applications written in languages like Java and C#. A Java Virtual Machine (JVM) customised for the operating system is essential for running Java applications. Similarly, the .NET Compact Framework (.NETCF) is required for running Microsoft® .NET applications on top of the operating system. The OS may include these as built-in components; if not, check the availability of the same from a third-party vendor for the OS under consideration.

5.15.2 Non-functional Requirements

Custom Developed or Off the Shelf: Depending on the OS requirement, it is possible to go for the complete in-house development of an operating system suiting the embedded system's needs, or to use an off-the-shelf, readily available operating system (either a commercial product or an Open Source product) which closely matches the system requirements. Sometimes it may be possible to build the required features by customising an Open Source OS. The decision on which to select is purely dependent on the development cost, the licensing fees for the OS, the development time and the availability of skilled resources.

Cost: The total cost for developing or buying the OS and maintaining it in terms of commercial product
and custom build needs to be evaluated before taking a decision on the selection of OS.

Development and Debugging Tools Availability: The availability of development and debugging tools is a critical decision-making factor in the selection of an OS for an embedded design. Certain operating systems may be superior in performance, but the availability of tools supporting the development may be limited. Explore the different tools available for the OS under consideration.

Ease of Use: How easy it is to use a commercial RTOS is another important feature that needs to be considered in the RTOS selection.

After Sales: For a commercial embedded RTOS, after-sales support in the form of e-mail, on-call services, etc. for bug fixes, critical patch updates and support for production issues should be analysed thoroughly.

5.16 RTOS comparative study

There are nearly 100 real-time operating systems in the commercial market. So, shopping for a real-
time operating system is not an easy task. We will review the following operating systems:

• QNX Neutrino
• VxWorks
• MicroC/OS-II
• RTLinux

5.16.1 QNX Neutrino

QNX Neutrino is a popular real-time operating system of QNX Software Systems Limited
(www.qnx.com). It supports a number of processors such as ARM, MIPS, Power PC, SH-4, StrongARM,
x86 and Pentium. Board Support Packages and Device Driver Kit help in fast development of your
prototype. It provides an excellent Integrated Development Environment. It has support for C, C++
and Java languages and TCP/IP protocol stack.

It has support for multiple scheduling algorithms such as round-robin, FIFO, etc., and the same application can use different scheduling algorithms for different tasks. Up to 65,535 tasks are supported, and each task can have up to 65,535 threads. The minimum time resolution is one nanosecond. Even small embedded systems can use this OS, as it requires only 64K of kernel ROM and 32K of kernel RAM.
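
As a hedged, generic POSIX-style sketch (not QNX-specific code) of assigning a round-robin policy and priority to a thread; the priority value 10 is an assumption and the valid range is OS-dependent:

    #include <pthread.h>
    #include <sched.h>

    static void *periodic_task(void *arg) {
        (void)arg;
        /* real-time work would go here */
        return NULL;
    }

    int main(void) {
        pthread_attr_t attr;
        struct sched_param sp;
        pthread_attr_init(&attr);
        pthread_attr_setinheritsched(&attr, PTHREAD_EXPLICIT_SCHED); /* do not inherit the caller's policy */
        pthread_attr_setschedpolicy(&attr, SCHED_RR);                /* round-robin; SCHED_FIFO is also possible */
        sp.sched_priority = 10;
        pthread_attr_setschedparam(&attr, &sp);

        pthread_t tid;
        if (pthread_create(&tid, &attr, periodic_task, NULL) != 0)
            return 1;                      /* may fail without sufficient privileges */
        pthread_join(tid, NULL);
        pthread_attr_destroy(&attr);
        return 0;
    }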

In Brief:
QNX Neutrino is a real-time operating system that supports multiple scheduling algorithms and up to
65,535 tasks. MySQL can be integrated with this OS to create embedded database applications.

5.16.2 VxWorks

Wind River's VxWorks (www.windriver.com) is one of the most popular real-time operating systems.
This OS has been used in the Mars Pathfinder. It supports a number of processors including PowerPC,
Intel StrongARM, ARM, Hitachi SuperH, Motorola ColdFire, etc.

It supports both preemptive and round-robin scheduling algorithms, and 256 priority levels can be assigned to tasks. It supports priority inheritance; those who are against priority inheritance need not use this feature, which is provided as an option to the developer.

In Brief:
VxWorks is a real-time operating system that supports multiple scheduling algorithms and also priority
inheritance.

5.16.3 MicroC/OS-II

Micro-Controller Operating System version 2 (MicroC/OS-II), developed by Jean J. Labrosse (www.ucos-II.com), is a preemptive real-time operating system which is popular for teaching RTOS concepts. It is also used widely in many commercial applications, including mission-critical applications. It is certified by the Federal Aviation Administration for use in commercial aircraft; the standard RTCA DO-178B specifies the requirements.

The author of this OS summarizes its features beautifully: "source code availability, ROMable, scalable, preemptive, portable, multitasking, deterministic, reliable, support for different platforms".

This OS supports 64 tasks, out of which eight are system tasks. Hence, the application can have up to 56 tasks. Each task is assigned a unique priority. The round-robin scheduling algorithm is not supported by this operating system.

In Brief:
MicroC/OS-II is a real-time operating system used extensively in academic institutions for teaching
operating system concepts. It is available in source code form for non-commercial purposes. The
number of tasks can be 64, out of which eight are system tasks. Round-robin scheduling is not
supported.

5.16.4 RTLinux

FSM Labs (www.fsmlabs.com) has two editions of RTLinux: RTLinuxPro and RTLinuxFree. RTLinuxPro is the priced edition and RTLinuxFree is the open source release. RTLinux is a hard real-time operating system with support for many processors such as x86, Pentium, PowerPC, ARM, Fujitsu, MIPS and Alpha. A footprint of 4 MB is required for RTLinux. It does not support priority inheritance.

RTLinux runs underneath the Linux operating system, and the Linux OS becomes an idle task for RTLinux. RTLinux tasks are given priority over Linux tasks. Interrupts from Linux are disabled to achieve real-time performance; this interrupt disabling is done using a layer of emulation software between the Linux kernel and the interrupt controller hardware. Tasks which do not have any timing constraints run in the Linux kernel only. Soft real-time capability is provided by Linux in the system, while hard real-time tasks run in the real-time kernel. The worst-case time between an interrupt signal and the start of the real-time handler is 15 microseconds.

MiniRTL, a tiny implementation of RTLinux, runs on 486 machines. This implementation is targeted
towards PC/104 boards.

In Brief:
RTLinux runs underneath the Linux operating system, with Linux acting as an idle task for RTLinux. The real-time software running under RTLinux is given priority over the non-real-time threads running under Linux. This OS is an excellent choice for 32-bit processor based embedded systems.

6 Module 6 - Introduction to Embedded target boards

6.1 Raspberry Pi & Arduino

Raspberry Pi and Arduino are quite different boards. Each board has its own advantages and
disadvantages. If you want to decide between the two, then it depends on the requirement of your
project.

Arduino was invented by Massimo Banzi in Italy as a simple hardware prototyping tool, while Raspberry Pi was invented by Eben Upton at the University of Cambridge in the United Kingdom for improving the programming skills of his students.

Both of these teaching tools are suitable for beginners and hobbyists. The main difference between them is that Arduino is a microcontroller board, while Raspberry Pi is a mini computer; thus, an Arduino is comparable to just one part of a Raspberry Pi. Raspberry Pi is good at software applications, while Arduino makes hardware projects simple.

Figure 41- Arduino UNO

Figure 42- Raspberry Pi 3

The table below gives some of the differences between the two boards.

6.1.1 Differences between Raspberry Pi and Arduino

1. Raspberry Pi: It is a mini computer with Raspbian OS and can run multiple programs at a time. Arduino: It is a microcontroller board, which is just a part of a computer, and it runs only one program again and again.

2. Raspberry Pi: It is difficult to power using a battery pack. Arduino: It can be powered using a battery pack.

3. Raspberry Pi: Interfacing sensors and other components requires complex tasks like installing libraries and software. Arduino: It is very simple to interface sensors and other electronic components.

4. Raspberry Pi: It is expensive. Arduino: It is available at low cost.

5. Raspberry Pi: It can be easily connected to the internet using the Ethernet port or USB Wi-Fi dongles. Arduino: It requires external hardware to connect to the internet, and this hardware must be addressed properly in code.

6. Raspberry Pi: It does not have storage on board; it provides an SD card port instead. Arduino: It provides on-board storage.

7. Raspberry Pi: It has 4 USB ports to connect different devices. Arduino: It has only one USB port, used to connect to the computer.

8. Raspberry Pi: The processor used is from the ARM family. Arduino: The processor used is from the AVR family (ATmega328P).

9. Raspberry Pi: It should be properly shut down, otherwise there is a risk of file corruption and software problems. Arduino: It is just a plug-and-play device; if power is connected it starts running the program, and if disconnected it simply stops.

10. Raspberry Pi: The recommended programming language is Python, but C, C++, Python and Ruby come pre-installed. Arduino: It uses Arduino C/C++.

These two boards run on very low power, but a power interruption may damage the software and applications on the Raspberry Pi. In the case of the Arduino, if there is any power cut it simply restarts. So the Raspberry Pi must be properly shut down before disconnecting power.

The Raspberry Pi comes with a fully functional operating system called Raspbian. It has all the features of a computer, with a processor, memory and graphics driver. The Pi can use different operating systems: although Linux is preferred, Android can also be installed. The Arduino does not have any operating system; its firmware simply interprets the code written to it, and it is very easy to execute simple code.
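
A minimal sketch of this "one program, repeated forever" model is the classic blink example below, assuming a board whose built-in LED is mapped to LED_BUILTIN:

    void setup() {
        pinMode(LED_BUILTIN, OUTPUT);      // runs once after power-up or reset
    }

    void loop() {                          // runs again and again
        digitalWrite(LED_BUILTIN, HIGH);   // LED on
        delay(500);                        // wait 500 ms
        digitalWrite(LED_BUILTIN, LOW);    // LED off
        delay(500);
    }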

Input and output pins allow these boards to connect to other devices. The Raspberry Pi 2 has a 40-pin GPIO header, while the Arduino Uno has 20 I/O pins.

The Pi is roughly 40 times faster than the Arduino in clock speed and has on the order of 128,000 times more RAM, so the Raspberry Pi is more powerful than the Arduino.

The Arduino has 32 KB of on-board storage, which is used for storing the code that decides the functions of the Arduino. The Raspberry Pi does not have any on-board storage, but it provides a micro SD port.

The Arduino can be expanded using external hardware like Wi-Fi, Ethernet, touchscreens, cameras, etc. These add-on boards are called shields, and they are easily installed on an Arduino. The Raspberry Pi, on the other hand, is a largely self-contained board. The Pi can also accept HATs to add hardware like touchscreens, GPS, RGB panels, etc., but it does not have as many options as the Arduino.

Arduino uses the Arduino IDE for developing code, while the Raspberry Pi can use Scratch, IDLE or anything else that supports Linux.

6.1.2 How to decide between Raspberry Pi and Arduino

So, to decide between the two, first you should know what you want to do in your project.

• From the above discussion we can understand that Arduino is good for repetitive tasks such as opening the garage door or switching lights on and off.

• The Pi is good for performing multiple tasks at once, such as driving complicated robots.

• For example, if you want to monitor the soil moisture and be mailed when it is necessary to water the plants, an Arduino can be used.

• But if you want to monitor the moisture, be mailed when the plants need to be watered, and also check the weather report online and do nothing if rain is expected, then a Raspberry Pi is required.

• In simple terms, Arduino is used for beginners' projects, while more complicated projects can be easily handled by the Pi.

6.2 Intel Galileo

Figure 43- Intel Galileo

Galileo is a microcontroller board based on the Intel® Quark SoC X1000 Application Processor, a 32-bit Intel Pentium-class system on a chip. It is the first board based on Intel® architecture designed to be hardware and software pin-compatible with Arduino shields designed for the Uno R3. Digital pins 0 to 13 (and the adjacent AREF and GND pins), analog inputs 0 to 5, the power header, the ICSP header and the UART port pins (0 and 1) are all in the same locations as on the Arduino Uno R3.

The Galileo board is also software compatible with the Arduino Software Development Environment (IDE), which makes usability and introduction a snap. In addition to Arduino hardware and software compatibility, the Galileo board has several PC industry-standard I/O ports and features to expand native usage and capabilities beyond the Arduino shield ecosystem. A full-sized mini-PCI Express slot, a 100 Mb Ethernet port, a Micro-SD slot, an RS-232 serial port, a USB Host port, a USB Client port and 8 MB of NOR flash come standard on the board.

6.2.1 Features of the Intel® Galileo Board

- Arduino: The Intel Galileo Board is the first Arduino board based on Intel architecture. The headers
(what you connect jumper cables to on the board) are based off the Arduino 1.0 pinout model
that's found on the Arduino Uno R3 boards. This provides the ability to use compatible shields
(modules that you can plug into headers), allowing you to extend the functionality of the board.
Like the Uno, it has 14 digital I/O pins, 6 analog inputs, a serial port, and an ICSP header for serial
programming.

- Quark: The board features an Intel® Quark SoC X1000 Application Processor, designed for the
Internet of Things. It's smaller and more power efficient than the Intel Atom® Processor, making
it great for small, low-powered projects.

- Ethernet: On the top portion of the board, right next to what looks like an audio jack labeled UART,
there is a 100 Mb Ethernet port that allows the Intel Galileo to connect to wired networks. Once
your board is connected to the Internet, anything is possible.

- Mini-PCIe: The Intel Galileo is the first Arduino Certified board that provides a mini PCI Express
(mPCIe) slot. This allows you to connect standard mPCIe modules like Wi-Fi, Bluetooth, and SIM
card adapters for cell phones.

- Real Time Clock (RTC): Synchronize data between modules using the board's integrated Real Time Clock. Using the Arduino Time Library, you can add timekeeping functionality to your program. Wireless projects can synchronize in real time using the Network Time Protocol (NTP) and Global Positioning System (GPS) time data. To preserve time between system resets, add a coin cell battery to your Intel Galileo Board.

- micro SD: Use the optional onboard micro SD card reader that is accessible through the Secure Digital (SD) Library. Unlike other Arduinos, the Intel Galileo does not save sketches (programs) between power on/off states of the board without an SD card. Using a micro SD card, you can store up to 32 GB of data!

- Linux*: Using the Linux image for the Intel Galileo, you can access serial ports, Wi-Fi and board pins using programming languages and frameworks like the Advanced Linux Sound Architecture (ALSA), Video4Linux (V4L2), Python, Secure Shell (SSH), Node.js and OpenCV. Using these extra features provided by Linux requires a micro SD card. Take advantage of the Intel Quark processing power and create something amazing.

6.3 Difference between Intel Galileo, Raspberry Pi, Arduino Yun

6.4 Arduino Libraries

The Arduino environment can be extended through the use of libraries, just like most programming
platforms. Libraries provide extra functionality for use in sketches, e.g. working with hardware or
manipulating data. To use a library in a sketch, select it from Sketch > Import Library.
A number of libraries come installed with the IDE, but you can also download or create your own.
See these instructions for details on installing libraries. There's also a tutorial on writing your own
libraries. See the API Style Guide for information on making a good Arduino-style API for your library.
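
As a hedged example of pulling in one of the bundled libraries, the sketch below uses the Servo library; the pin number 9 and the sweep angles are assumptions chosen for illustration only.

    #include <Servo.h>        // bundled library added via Sketch > Import Library

    Servo myServo;            // one servo object

    void setup() {
        myServo.attach(9);    // assume the servo signal wire is on pin 9
    }

    void loop() {
        myServo.write(0);     // move to 0 degrees
        delay(1000);
        myServo.write(90);    // move to 90 degrees
        delay(1000);
    }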

6.4.1 Standard Libraries

• EEPROM - reading and writing to "permanent" storage

• Ethernet - for connecting to the internet using the Arduino Ethernet Shield, Arduino Ethernet
Shield 2 and Arduino Leonardo ETH

• Firmata - for communicating with applications on the computer using a standard serial
protocol.

• GSM - for connecting to a GSM/GRPS network with the GSM shield.

• LiquidCrystal - for controlling liquid crystal displays (LCDs)

• SD - for reading and writing SD cards

• Servo - for controlling servo motors

• SPI - for communicating with devices using the Serial Peripheral Interface (SPI) Bus

• SoftwareSerial - for serial communication on any digital pins. Version 1.0 and later of Arduino
incorporate Mikal Hart's NewSoftSerial library as SoftwareSerial.

• Stepper - for controlling stepper motors

• TFT - for drawing text, images, and shapes on the Arduino TFT screen

• WiFi - for connecting to the internet using the Arduino WiFi shield

• Wire - Two Wire Interface (TWI/I2C) for sending and receiving data over a net of devices or
sensors.

The Matrix and Sprite libraries are no longer part of the core distribution.
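
As a quick taste of one of the standard libraries listed above, the hedged sketch below uses the EEPROM library to keep a one-byte boot counter; address 0 and the counter idea are assumptions, and on AVR boards the EEPROM has a limited number of write cycles.

    #include <EEPROM.h>

    void setup() {
        Serial.begin(9600);
        byte boots = EEPROM.read(0);   // read one byte from EEPROM address 0
        EEPROM.write(0, boots + 1);    // store the incremented boot counter
        Serial.print("boot count: ");
        Serial.println(boots + 1);
    }

    void loop() {
        // nothing to do repeatedly
    }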

6.4.2 101 Only Libraries

• CurieBLE - Interact with smartphones and tablets with Bluetooth Low Energy (BLE).

• CurieIMU - Manage the on-board accelerometer and gyro.

• CurieTimerOne - Allows to use Timer functions.

• CurieTime - Allows to control and use the internal RTC (Real Time Clock)

6.4.3 Due Only Libraries

• Audio - Play audio files from a SD card.

6.4.4 Due, Zero and MKR1000 Libraries

• USBHost - Communicate with USB peripherals like mice and keyboards.

• Scheduler - Manage multiple non-blocking tasks.

6.4.5 Zero, MKRZERO and MKR1000 Libraries

• AudioFrequencyMeter - Sample an audio signal and get its frequency back

• AudioZero - Play audio files from a SD card

• RTC - Real Time Clock to schedule events

• ArduinoSound - A simple way to play and analyze audio data

• I2S - To use the I2S protocol on SAMD21

6.4.6 WiFi 101 and MKR1000 Library

• WiFi101 - library to be used only with Wifi shield 101

• WiFi101OTA - Over-the-air updates on MKR1000

6.4.7 MKR WiFi 1010, MKR VIDOR 4000 and Arduino UNO WiFi Rev.2

• WiFi NINA - library to use the WiFi Nina module of the above boards.

6.4.8 MKR Motor Carrier Only Library

• MKR Motor Carrier - Library to be used with the MKR Motor Carrier

6.4.9 MKR FOX 1200 only Library

• SigFox - library to be used only with MKRFOX1200

6.4.10 MKR WAN 1300 only Library

• MKRWAN - library to be used only with MKR WAN 1300

6.4.11 MKR GSM 1400 only Library

• MKRGSM - library to be used only with MKR GSM 1400

6.4.12 MKR NB 1500 only Library

• MKRNB - library to be used only with MKR NB 1500

6.4.13 Esplora Only Library

• Esplora - this library enables easy access to the various sensors and actuators mounted on the Esplora board.

6.4.14 Arduino Robot Library

• Robot - this library enables easy access to the functions of the Arduino Robot.

6.4.15 Yún devices Library

• Bridge Library - Enables communication between the Linux processor and the microcontroller
on the Yún.

• Ciao Library - Aims to simplify interaction between microcontroller and Linino OS allowing a
variety of connections with most common protocols

6.4.16 USB Libraries (Leonardo, Micro, Due, Zero and Esplora)

• Keyboard - Send keystrokes to an attached computer.

• Mouse - Control cursor movement on a connected computer.

6.4.17 Contributed Libraries

If you're using one of these libraries, you need to install it first. See these instructions for details on
installation. There's also a tutorial on writing your own libraries.

6.4.17.1 Communication (networking and protocols):
• Messenger - for processing text-based messages from the computer

• NewSoftSerial - an improved version of the SoftwareSerial library

• OneWire - control devices (from Dallas Semiconductor) that use the One Wire protocol.

• PS2Keyboard - read characters from a PS2 keyboard.

• Simple Message System - send messages between Arduino and the computer

• SSerial2Mobile - send text messages or emails using a cell phone (via AT commands over software serial)

• Webduino - extensible web server library (for use with the Arduino Ethernet Shield)

• X10 - Sending X10 signals over AC power lines

• XBee - for communicating with XBees in API mode

• SerialControl - Remote control other Arduinos over a serial connection

6.4.17.2 Sensing:

• Capacitive Sensing - turn two or more pins into capacitive sensors

• Debounce - for reading noisy digital inputs (e.g. from buttons)
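
For reference, a plain millis()-based debounce of the kind such a library wraps is sketched below; the wiring (a push button between pin 2 and GND, using the internal pull-up) and the 50 ms settle time are assumptions.

    const int buttonPin = 2;                 // assumed wiring: button from pin 2 to GND
    const unsigned long debounceMs = 50;     // how long the input must stay stable

    int lastReading = HIGH;                  // last raw reading
    int stableState = HIGH;                  // last accepted (debounced) state
    unsigned long lastChangeMs = 0;

    void setup() {
        pinMode(buttonPin, INPUT_PULLUP);
        Serial.begin(9600);
    }

    void loop() {
        int reading = digitalRead(buttonPin);
        if (reading != lastReading) {
            lastChangeMs = millis();         // input changed: restart the settle timer
            lastReading = reading;
        }
        if (millis() - lastChangeMs > debounceMs && reading != stableState) {
            stableState = reading;           // accept the new state only after it has settled
            if (stableState == LOW) {
                Serial.println("button pressed");
            }
        }
    }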

6.4.17.3 Displays and LEDs:

• GFX - base class with standard graphics routines (by Adafruit Industries)

• GLCD - graphics routines for LCD based on the KS0108 or equivalent chipset.

• Improved LCD library fixes LCD initialization bugs in official Arduino LCD library

• LedControl - for controlling LED matrices or seven-segment displays with
a MAX7221 or MAX7219.

• LedControl - an alternative to the Matrix library for driving multiple LEDs with Maxim chips.

• LedDisplay - control of a HCMS-29xx scrolling LED display.

• Matrix - Basic LED Matrix display manipulation library

• PCD8544 - for the LCD controller on Nokia 5110-like displays (by Adafruit Industries)

• Sprite - Basic image sprite manipulation library for use in animations with an LED matrix

• ST7735 - for the LCD controller on a 1.8", 128x160 TFT screen (by Adafruit Industries)

6.4.17.4 Audio and Waveforms:

• FFT - frequency analysis of audio or other analog signals

• Tone - generate audio frequency square waves in the background on any microcontroller pin

6.4.17.5 Motors and PWM:

• TLC5940 - 16 channel 12 bit PWM controller.

6.4.17.6 Timing:
• DateTime - a library for keeping track of the current date and time in software.

• Metro - help you time actions at regular intervals

• MsTimer2 - uses the timer 2 interrupt to trigger an action every N milliseconds.

6.4.17.7 Utilities:
• PString - a lightweight class for printing to buffers
• Streaming - a method to simplify print statements
