
Lecture Notes - Computer Architecture & Organization (An Augmented Reality Experience)


LECTURE NOTES

COMPUTER ARCHITECTURE & ORGANIZATION

PREPARED BY
DARNI BINTI DARMIN
AZHANI BINTI HASHIM




CONTENT

1.0 COMPUTER SYSTEM
1.1 Computer generations
1.2 Computer architecture vs organization
1.3 Computer performance
1.4 Hardware, software and firmware
1.5 Basic components and programming of a digital computer
1.6 Complex instruction set computer (CISC) vs reduced instruction set computer (RISC)
1.7 Programming languages
2.0 BASICS OF COMPUTER ARCHITECTURE AND ARITHMETIC
2.1 Von Neumann architecture
2.2 The computer architecture
3.0 ARITHMETIC LOGIC UNIT (ALU)
3.1 Arithmetic operation
3.2 The function of registers
3.3 Adder
3.4 Magnitude comparator


TOPIC 1

Computer System

A computer is a device that accepts digital information and manipulates it according to a
program. Most computers also store data and programs for as long as they are needed. A
program may be built into the microprocessor, or different programs may be supplied to the
computer by loading them into its storage and then starting them, typically by an
administrator or user.

1.1 COMPUTER GENERATIONS

Generations of computers describe the history of computers in terms of evolving
technologies. With each new generation, computer circuitry, size and parts have been
miniaturised, processing speed has increased, memory has grown larger, and usability and
reliability have improved. Nowadays a generation includes both hardware and software,
which together make up an entire computer system. Five computer generations are
recognised to date. Each generation is discussed below together with its time period and
characteristics; the dates given for each generation are approximate but commonly accepted.


1.1.1 1940 – 1956: First Generation – Vacuum Tubes

These early computers used vacuum tubes as circuitry and magnetic drums for memory. As a
result they were enormous, literally taking up entire rooms and costing a fortune to run. Vacuum
tubes were inefficient components that consumed huge amounts of electricity and generated a
lot of heat, which caused ongoing breakdowns.
These first-generation computers relied on ‘machine language’ (the most basic programming
language that can be understood by computers) and were limited to solving one problem at a
time. Input was based on punched cards and paper tape, and output came out on printouts. The
two notable machines of this era were the UNIVAC and ENIAC machines; the UNIVAC was the
first ever commercial computer, purchased in 1951 by the US Census Bureau.

ENIAC

1.1.2 1956 – 1963: Second Generation – Transistors

The replacement of vacuum tubes by transistors marked the advent of the second generation of
computing. Although first invented in 1947, transistors were not used significantly in computers
until the end of the 1950s. They were hugely superior to vacuum tubes, making computers
smaller, faster, cheaper and far less power-hungry, although they still subjected computers to
damaging levels of heat. These machines still relied on punched cards for input and printouts
for output.
The language evolved from cryptic binary machine language to symbolic (‘assembly’) languages,
which meant programmers could create instructions in words. At about the same time, high-level
programming languages were being developed (early versions of COBOL and FORTRAN).
Transistor-driven machines were the first computers to store instructions in their memories,
moving from magnetic drum to magnetic core technology. The early versions of these machines
were developed for the atomic energy industry.


1.1.3 1964 – 1971: Third Generation – Integrated Circuits
By this phase, transistors were being miniaturised and put on silicon chips (called
semiconductors), which led to a massive increase in the speed and efficiency of these machines.
These were the first computers where users interacted through keyboards and monitors that
interfaced with an operating system, a significant leap from punched cards and printouts. This
enabled the machines to run several applications at once using a central program that monitored
memory. As a result of these advances, which again made machines cheaper and smaller, a new
mass market of users emerged during the 1960s.

1.1.4 1972 – 2010: Fourth Generation – Microprocessors
This revolution can be summed up in one word: Intel. The chip-maker developed the Intel 4004 chip
in 1971, which placed all the computer components (CPU, memory, input/output controls) onto a
single chip. What filled a room in the 1940s now fitted in the palm of the hand. The Intel chip housed
thousands of integrated circuits. The year 1981 saw the first computer (from IBM) specifically
designed for home use, and 1984 saw the Macintosh introduced by Apple. Microprocessors also
moved beyond the realm of computers and into an increasing number of everyday products.
The increased power of these small computers meant they could be linked into networks, which
ultimately led to the development, birth and rapid evolution of the Internet. Other major
advances during this period were the graphical user interface (GUI), the mouse and, more
recently, the astounding advances in laptop capability and hand-held devices.

Personal Computer


1.1.5 2010-Present : Fifth Generation – Artificial Intelligence
Fifth-generation computing is based on artificial intelligence (AI) and is still being developed,
although some of its applications, such as voice recognition, are already in everyday use. The
goal of fifth-generation computing is to create devices that respond to natural-language input
and are capable of learning and self-organisation. Technologies such as parallel processing,
superconductors and quantum computing are helping to make this a reality.

PROTON X-70 Voice Command


1.2 COMPUTER ARCHITECTURE VS ORGANIZATION

Let’s understand what computer architecture and computer organization are. Computer
architecture refers to the attributes of a system that are visible to the programmer, such as the
instruction set, data types, addressing modes and I/O mechanisms. Computer organization refers
to the operational units and their interconnections that realise the architecture, such as control
signals, interfaces and memory technology.


1.3 COMPUTER PERFORMANCE

A computer’s speed is heavily influenced by the CPU it uses. There are several factors
that affect how quickly a CPU can carry out instructions:
1.3.1 Clock Speed
The CPU can only carry out one instruction at a time. The speed at which the CPU
can carry out instructions is called the clock speed. This is controlled by a clock. With
every tick of the clock, the CPU fetches and executes one instruction. The clock speed
is measured in cycles per second, and one cycle per second is known as 1 hertz. This
means that a CPU with a clock speed of 2 gigahertz (GHz) can carry out two thousand
million (or two billion) cycles per second. The higher the clock speed a CPU has, the
faster it can process instructions.

Clock Speed Analogy

1.3.2 Cores
A CPU is traditionally made up of a processor with a single core. Most modern CPUs
have two, four or even more cores. A CPU with two cores, called a dual core processor,
is like having two processors in one. A dual core processor can fetch and execute two
instructions in the same time it takes a single core processor to fetch and execute just
one instruction. A quad core processor has four cores and can carry out even more
instructions in the same period of time.


Single Core and Quad Core Analogy
1.3.3 Cache
A cache is a tiny block of memory built right onto the processor. The most commonly
used instructions and data are stored in the cache so that they are close at hand. The
bigger the cache is, the more quickly the commonly used instructions and data can be
brought into the processor and used.
1.3.4 Instructions per second
When measuring a CPU, many experts attempt to read the millions of instructions per
second, or MIPS. Million Instructions Per Second is a measure of the execution speed of
the computer. The measure approximately provides the number of machine instructions that
could be executed in a second by a computer. For example, a program that executes 3
million instructions in 2 seconds has a MIPS rating of 1.5. Instructions per second is easy to
understand and measure, but it may not reflect actual performance, since instructions range
from simple to complex and take different numbers of clock cycles.
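As a quick check of this calculation, here is a minimal Python sketch (the function name and the example figures are simply those from the paragraph above, not part of any standard library):

    # MIPS = (instruction count / execution time in seconds) / 1,000,000
    def mips(instruction_count, execution_time_s):
        return instruction_count / execution_time_s / 1_000_000

    # The example from the text: 3 million instructions in 2 seconds
    print(mips(3_000_000, 2))   # 1.5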


1.4 HARDWARE, SOFTWARE AND FIRMWARE

Hardware, or computer hardware, is the collection of physical parts or components that
constitutes a computer system, such as the monitor, mouse, keyboard, storage, graphics
card, sound card, memory and motherboard.

Hardware
Software or computer software is any set of instructions that directs a computer to
perform specific operations.

Software


Firmware is a type of software that provides control, monitoring and data manipulation for a
system. Examples of devices that contain firmware are embedded systems (traffic lights,
consumer appliances, digital watches), computers, mobile phones and digital cameras.
Firmware is kept in non-volatile memory (ROM, EPROM or flash memory). The BIOS
(Basic Input/Output System) is firmware that starts the computer system after it is turned on;
it also manages the data flow between the operating system and devices such as the hard disk,
keyboard and mouse.

Firmware

1.5 BASIC COMPONENTS AND PROGRAMMING OF A DIGITAL COMPUTER

1.5.1 Input/Output
An input/output (I/O) device is a hardware device that can accept input and send output or
other processed data. It can also read data from storage media as input to the computer, or
write computer data to storage media as output. An input device is any hardware device that
sends data to a computer, while an output device is any peripheral that receives data from a
computer for display, projection or physical reproduction.


Exercise 1
i. Identify 4 input devices
ii. Identify 4 output devices


1.5.2 Central Processing Unit (CPU)
Central processing unit or microprocessor is a digital integrated circuit that can be
programmed with a series of instructions to perform various operations on data. It can
do arithmetic and logic operations, move data and make decision based on certain
instructions.

Central Processing Unit(CPU)

CPU contains three basic parts:
i. Arithmetic and Logic Unit
ii. Control Unit
iii. Register (Accumulator)


1.5.3 Memory
A memory is just like a human brain: it is used to store data and instructions. Computer
memory is the storage space in the computer where the data to be processed and the
instructions required for processing are stored. The memory is divided into a large number
of small parts called cells. Each location or cell has a unique address, which ranges from
zero to the memory size minus one. For example, if the computer has 64K words, then the
memory unit has 64 X 1024 = 65536 memory locations, and the addresses of these locations
range from 0 to 65535.
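This relationship between memory size and address range can be verified with a short calculation (a minimal sketch; the 64K-word size is just the figure used in the example above):

    import math

    words = 64 * 1024                     # a 64K-word memory
    print(words)                          # 65536 locations, addressed 0 .. 65535
    print(math.ceil(math.log2(words)))    # 16 address bits are needed to cover them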
Memory is primarily of three types :
i. Cache Memory
ii. Primary Memory (Main Memory)
iii. Secondary Memory (Storage Memory)

Random Access Memory (RAM) as Main Memory

1.5.4 Buses
A bus system is a pathway composed of cables and connectors used to carry data
between the computer’s microprocessor and the main memory. The bus provides
a communication path for the data and control signals moving between the major
components of the computer system.


Bus System

The bus system works by combining the functions of the three main buses: the data,
address and control buses. Each of the three buses has its separate characteristics
and responsibilities.
i. The data bus transfers the actual data.
ii. The address bus transfers the address indicating where the data should go.
iii. The control bus transfers the control signals that specify how the information
transfer is to take place. It also carries signals that report the status of various
devices.


1.6 COMPLEX INSTRUCTION SET COMPUTER (CISC) VS

REDUCED INSTRUCTION SET COMPUTER (RISC)

The main idea behind the Reduced Instruction Set Computer (RISC) approach is to make
the hardware simpler by using an instruction set composed of a few basic operations for
loading, evaluating and storing; for example, a load instruction loads data and a store
instruction stores it. In the Complex Instruction Set Computer (CISC) approach, the main
idea is to make the hardware do more, so that a single instruction performs the loading,
evaluating and storing; for example, a single multiplication instruction loads the data,
evaluates the product and stores the result. Both approaches aim to increase CPU performance.
i. RISC: reduces the cycles per instruction at the cost of a larger number of
instructions per program.
ii. CISC: attempts to minimise the number of instructions per program, but at the
cost of an increase in the number of cycles per instruction (see the sketch below).
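The trade-off can be expressed with the classic performance equation: CPU time = instruction count x cycles per instruction (CPI) x clock period. The short sketch below uses invented instruction counts and CPI values purely to illustrate the trade-off; they are not measurements of any real RISC or CISC processor.

    # CPU time = instruction count * CPI / clock rate
    def cpu_time(instruction_count, cpi, clock_hz):
        return instruction_count * cpi / clock_hz

    CLOCK_HZ = 2_000_000_000   # a 2 GHz clock, as in section 1.3.1

    # Hypothetical program: RISC executes more, simpler instructions (low CPI);
    # CISC executes fewer, more complex instructions (higher CPI).
    risc = cpu_time(instruction_count=12_000_000, cpi=1.2, clock_hz=CLOCK_HZ)
    cisc = cpu_time(instruction_count=4_000_000, cpi=4.0, clock_hz=CLOCK_HZ)

    print(f"RISC: {risc * 1000:.2f} ms, CISC: {cisc * 1000:.2f} ms")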


1.7 PROGRAMMING LANGUAGE

A program is an algorithm written for a computer in a special programming language.
A programming language is an artificial language that can be used to control the
behaviour of a machine or computer.
Types of programming languages:
i. Machine Languages

Machine language consists of instructions and data that are executed directly by the
CPU. Instructions are patterns of 0s and 1s, with different patterns for different
instructions. Machine language is very difficult for humans to read and write; it is
understood only by the computer.
ii. Assembly Languages
Instructions are written in symbolic form, called MNEMONICS, for example MOV,
ADD and SUB. These symbols represent the numeric machine codes. Examples of
assembly languages are MPASM, 68K ASM and 8085 ASM. Assembly language
requires a utility program called an ASSEMBLER, which translates the mnemonics
into machine code. Each computer or microprocessor has its own assembly language,
so much time and effort is required to understand and learn the different assembly
languages.

Example of 68K Assembly Language


iii. High-Level Languages
A high-level programming language is more abstract,
easier to use and more portable across OS
platforms. It enables a programmer to write
programs that are independent of a particular
type of computer and closer to human
language. Examples are C, C++, VISUAL
BASIC, SQL and PASCAL. Programs written
in a high-level language must be translated
into machine language by a compiler or
an interpreter.

Example of C++ Programming Language

The native language of a computer is machine
language. Assembly and high-level languages
were developed to be more friendly and useful
to programmers.


TOPIC 2

BASICS OF COMPUTER ARCHITECTURE AND ARITHMETIC

2.1 VON NEUMANN ARCHITECTURE

In the 1940s, a mathematician called John Von Neumann described the basic
arrangement (or architecture) of a computer. Most computers today follow the concept
that he described, although there are other types of architecture.

Von Neumann Architecture

2.1.1 Control Unit

This unit controls the operation of all parts of the computer but does not carry out any
actual data-processing operations. The functions of this unit are:

i. It is responsible for controlling the transfer of data and instructions
among other units of a computer.

ii. It manages and coordinates all the units of the computer.

iii. It obtains the instructions from the memory, interprets them, and directs
the operation of the computer.


iv. It communicates with Input/Output devices for transfer of data or results
from storage.
v. It does not process or store data.

2.1.2 Memory
Computer memory is the storage space in the computer where the data to be processed
and the instructions required for processing are stored. The memory is divided into a large
number of small parts called cells. Each location or cell has a unique address, which
ranges from zero to the memory size minus one. For example, if the computer has 64K
words, then the memory unit has 64 * 1024 = 65536 memory locations, and the addresses
of these locations range from 0 to 65535.
Memory is primarily of three types:
i. Cache Memory

Cache memory is a very high-speed semiconductor memory which can
speed up the CPU. It acts as a buffer between the CPU and the main
memory. It is used to hold those parts of data and program which are
most frequently used by the CPU. The parts of data and programs are
transferred from the disk to cache memory by the operating system, from
where the CPU can access them.

Cache Memory


Advantages:

i. Cache memory is faster than main memory.
ii. It consumes less access time as compared to main memory.
iii. It stores the program that can be executed within a short period of time.
iv. It stores data for temporary use.

Disadvantages:

i. Cache memory has limited capacity.
ii. It is very expensive.

ii. Primary Memory/Main Memory

Primary memory holds only those data and instructions on which the computer
is currently working. It has a limited capacity, and data is lost when power is
switched off. It is generally made up of semiconductor devices. These memories
are not as fast as registers. The data and instructions required for processing
reside in the main memory. It is divided into two subcategories, RAM and ROM.

The characteristics of Main Memory:

i. These are semiconductor memories.
ii. It is known as the main memory.
iii. Usually volatile memory.
iv. Data is lost in case power is switched off.
v. It is the working memory of the computer.
vi. Faster than secondary memories.
vii. A computer cannot run without the primary memory.

iii. Secondary Memory

This type of memory is also known as external or non-volatile memory. It is
slower than the main memory and is used for storing data and information
permanently. The CPU does not access these memories directly; instead, they
are accessed via input/output routines. The contents of secondary memories
are first transferred to the main memory, and then the CPU can access them.
Examples are hard disks, CD-ROMs and DVDs.
The characteristics of Secondary Memory:
i. These are magnetic and optical memories.
ii. It is known as the backup memory.
iii. It is a non-volatile memory.
iv. Data is permanently stored even if power is switched off.
v. It is used for storage of data in a computer.
vi. Slower than primary memories.


The Examples of Secondary Memory
The hierarchy of the computer memory is shown as below:

The Hierarchy of Computer Memory


2.1.3 Arithmetic Logic Unit (ALU)

This unit consists of two subsections namely:
i. Arithmetic section
The function of the arithmetic section is to perform arithmetic operations such as
addition, subtraction, multiplication and division. All complex operations are
done by making repetitive use of these basic operations.
ii. Logic section
The function of the logic section is to perform logic operations such as AND, OR,
XOR and NOT.

2.1.4 Register

Registers are used to quickly accept, store and transfer data and instructions that are
being used immediately by the CPU. There are various types of registers, each used
for a particular purpose:

i. AC or Accumulator : stores results from the ALU.
ii. DR or Data Register : temporarily stores data being transmitted to or from
a peripheral device.
iii. PC or Program Counter : holds the address of the next instruction.
iv. MDR or Memory Data Register : holds data after a fetch from the
computer storage.
v. MBR or Memory Buffer Register : stores data or instructions coming from the
memory or going to the memory.

2.1.5 Input and Output Unit

Before a computer can process data, devices are required to input the data into the
computer. The device used depends on the form the data takes (text, sound,
artwork, etc.). After the computer has processed the data, other devices are used to
produce output of the results. This output could be a display on the computer screen,
hardcopy on printed pages, or even the audio playback of music you composed on
the computer. The terms ‘input’ and ‘output’ are used both as verbs, to describe the
process of entering or displaying the data, and as nouns, referring to the data itself
entered into or displayed by the computer.

Therefore:
i. An input unit/device is any hardware device that sends data to a
computer.
ii. An output unit/device is any peripheral that receives data from a
computer, usually for display, projection, or physical reproduction.


2.1.6 Pipeline Technique
Pipelining is a technique used in advanced microprocessors in which the microprocessor
begins executing a second instruction before the first has been completed. That
is, several instructions are in the pipeline simultaneously, each at a different
processing stage.
The pipeline is divided into segments, and each segment can execute its
operation concurrently with the other segments. When a segment completes an
operation, it passes the result to the next segment in the pipeline and fetches the
next operation from the preceding segment. The final results of each instruction
emerge at the end of the pipeline in rapid succession.

Pipeline Technique

2.1.7 Fetch-Decode-Execute Cycle
The main job of the CPU is to execute programs using the fetch-decode-execute
cycle (also known as the instruction cycle). This cycle begins as soon as
we turn on a computer. To execute a program, the program code is copied from
secondary storage into the main memory. The CPU’s program counter is set to
the memory location where the first instruction in the program has been stored,
and execution begins. The program is now running. In a program, each machine
code instruction takes up a slot in the main memory. These slots (or memory
locations) each have a unique memory address. The program counter stores
the address of each instruction and tells the CPU in what order they should be
carried out.
When a program is being executed, the CPU performs the fetch-decode-execute
cycle, which repeats over and over again until reaching the STOP instruction.


Fetch-Decode-Execute Cycle

Summary of the fetch-decode-execute cycle:
i. The processor checks the program counter to see which instruction to
run next.
ii. The program counter gives an address value in the memory of where
the next instruction is.
iii. The processor fetches the instruction value from this memory location.
iv. Once the instruction has been fetched, it needs to be decoded and executed.
For example, this could involve taking one value, putting it into the ALU,
then taking a different value from a register and adding the two together.
v. Once this is complete, the processor goes back to the program counter
to find the next instruction.
This cycle is repeated until the program ends.
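As an illustration only, the short Python sketch below simulates this loop with a made-up three-instruction machine language (LOAD, ADD, STOP); the opcodes and memory layout are invented for the example and do not correspond to any real instruction set.

    # A toy fetch-decode-execute loop. Each memory slot holds one instruction
    # as an (opcode, operand) pair; the program counter selects the next slot.
    memory = [("LOAD", 5), ("ADD", 3), ("STOP", None)]
    accumulator = 0
    program_counter = 0

    while True:
        opcode, operand = memory[program_counter]   # fetch and decode
        program_counter += 1                        # point at the next instruction
        if opcode == "LOAD":                        # execute
            accumulator = operand
        elif opcode == "ADD":
            accumulator += operand
        elif opcode == "STOP":
            break

    print(accumulator)   # 8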


2.2 THE COMPUTER ARCHITECTURE

2.2.1 Von Neumann Architecture Vs Harvard Architecture
The Von Neumann architecture describes the relationship between the hardware
components that make up a Von Neumann-based computer.
A Von Neumann-based computer is a computer that:
i. Uses a single processor.
ii. Uses one memory for both instructions and data.
iii. Executes programs by performing one instruction after the next in a
serial manner, using the fetch-decode-execute cycle.

Von Neumann Architecture

The Mark II computer was completed at Harvard University in 1947. It was not as
modern as the computer built by von Neumann’s team, but it introduced a slightly
different architecture: memory for data was separated from memory for
instructions. This concept is known as the Harvard architecture.


Harvard Architecture
The characteristics of both architectures are compared below.
2.2.2 The Strengths And Weaknesses Of Von Neumann Architecture
The main strength of the Von Neumann architecture is its simplicity and flexibility: a single
memory and a single set of buses serve both instructions and data. Its main weakness is the
Von Neumann bottleneck: because instructions and data share the same memory and bus, an
instruction and its data cannot be fetched at the same time, which limits performance. The
Harvard architecture avoids this bottleneck by using separate memories and buses for
instructions and data.


TOPIC 3

ARITHMETIC LOGIC UNIT (ALU)

3.1 ARITHMETIC OPERATION

An arithmetic logic unit (ALU) is a digital circuit used to perform arithmetic and
logic operations. It represents the fundamental building block of the central
processing unit (CPU) of a computer.

ALU Symbol

3.1.1 Arithmetic Operation
The basic arithmetic operations for real numbers are addition, subtraction, multiplication,
and division.
3.1.1.1 Addition
• Addition is the basic operation of arithmetic. In its simplest form, addition combines
two numbers into a single number, the sum of the numbers.
• Such as 2 + 2 = 4 or 3 + 5 = 8


3.1.1.2 Subtraction
• Subtraction is the inverse of addition.
• Subtraction finds the difference between two numbers, the minuend
minus the subtrahend.
• If the minuend is larger than the subtrahend, the difference is positive;
• if the minuend is smaller than the subtrahend, the difference is negative;
• if they are equal, the difference is 0.

3.1.1.3 Multiplication
• Multiplication is the second basic operation of arithmetic.
• Multiplication also combines two numbers into a single number, the
product.
• The two operands are called the multiplier and the multiplicand;
sometimes both are simply called factors.

3.1.1.4 Division
• Division is essentially the inverse of multiplication.
• Division finds the quotient of two numbers, the dividend divided by the
divisor.
• For distinct positive numbers, if the dividend is larger than the divisor,
the quotient is greater than 1, otherwise it is less than 1 (a similar rule
applies for negative numbers).
• The quotient multiplied by the divisor always yields the dividend.

3.1.2 Logic Operation
Logic operations include any operations that manipulate Boolean values. Boolean
values are either true or false. They are named after the English mathematician George
Boole, who invented Boolean algebra and is widely considered the founder of computer
science theory. Boolean values can also be represented as 1 and 0. Normally, 1 represents
true and 0 represents false, but it could be the other way around.

3.1.2.1 The NOT Operator
The bitwise NOT, or complement, is a unary operation that performs logical negation
on each bit, forming the ones’ complement of the given binary value. Bits that are 0
become 1, and those that are 1 become 0.

3.1.2.2 The AND Operator
A bitwise AND takes two equal-length binary representations and performs the logical
AND operation on each pair of the corresponding bits, by multiplying them. Thus, if
both bits in the compared position are 1, the bit in the resulting binary representation
is 1 (1 × 1 = 1); otherwise, the result is 0 (1 × 0 = 0 and 0 × 0 = 0).


3.1.2.3 The OR Operator
A bitwise OR takes two bit patterns of equal length and performs the logical inclusive
OR operation on each pair of corresponding bits. The result in each position is 0 if both
bits are 0, while otherwise the result is 1.
3.1.2.4 The XOR Operator
A bitwise XOR takes two bit patterns of equal length and performs the logical exclusive
OR operation on each pair of corresponding bits. The result in each position is 1 if
exactly one of the two bits is 1, and 0 if both are 0 or both are 1. In effect, XOR
compares two bits and gives 1 if the two bits are different and 0 if they are the same.
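The four operators behave exactly this way on binary values. A minimal Python sketch over two arbitrary 8-bit example values:

    a, b = 0b11001010, 0b10100110
    MASK = 0xFF                      # keep the NOT result within 8 bits

    print(format(~a & MASK, "08b"))  # NOT a   -> 00110101
    print(format(a & b, "08b"))      # a AND b -> 10000010
    print(format(a | b, "08b"))      # a OR b  -> 11101110
    print(format(a ^ b, "08b"))      # a XOR b -> 01101100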

3.2 THE FUNCTION OF REGISTERS

The ALU only performs arithmetic and logic operations on operands; it has no place of its
own to hold them. Therefore, registers are used to store the operands before an operation
and the result after the operation. Registers are also used to keep the status of an operation,
for example the carry, overflow and zero flags.
3.2.1 Types of Shift Register
There are four types of shift register: Serial-in to Serial-out (SISO), Serial-in to Parallel-out
(SIPO), Parallel-in to Serial-out (PISO) and Parallel-in to Parallel-out (PIPO).

Types of Register


3.2.1.1 Shift Register Operation
• Serial-in to Serial-out (SISO) - the data is shifted serially “IN” and “OUT”
of the register, one bit at a time in either a left or right direction under
clock control.

SISO

• Serial-in to Parallel-out (SIPO) - the register is loaded with serial data,
one bit at a time, with the stored data being available at the output in
parallel form.

Serial-in to Parallel-out (SIPO)

• Parallel-in to Serial-out (PISO) - the parallel data is loaded into the
register simultaneously and is shifted out of the register serially one bit
at a time under clock control.

Parallel-in to Serial-out (PISO)

• Parallel-in to Parallel-out (PIPO) - the parallel data is loaded
simultaneously into the register and transferred together to their
respective outputs by the same clock pulse.

Parallel-in to Parallel-out (PIPO)
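To make the shifting behaviour concrete, here is a minimal software model of a 4-bit SISO register; the register length and the input bit stream are arbitrary choices for this sketch.

    # 4-bit serial-in serial-out shift register modelled as a list of bits.
    register = [0, 0, 0, 0]

    def clock_pulse(register, serial_in):
        # On each clock pulse the contents shift one place to the right:
        # the new bit enters on the left and the rightmost bit is shifted out.
        serial_out = register[-1]
        register[:] = [serial_in] + register[:-1]
        return serial_out

    for bit in [1, 0, 1, 1]:      # shift four bits in, one per clock pulse
        clock_pulse(register, bit)

    print(register)                # [1, 1, 0, 1] - the last bit in is now leftmost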


3.3 ADDER

3.3.1 Half Adder Operation
Recall the basic rules for binary addition as stated below:

0+0=0
0+1=1
1+0=1
1+1= 10
These operations are performed by a logic circuit called a half-adder.
An adder is a digital circuit that performs addition of numbers.
A half-adder accepts two binary digits on its inputs and produces two binary digits on its
outputs: a sum bit and a carry bit.

Block Diagram, Schematic Circuit and Truth Table of Half Adder

A Simplified Boolean Function

Schematic Circuit of Half Adder
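The half-adder’s behaviour follows directly from its Boolean functions, sum = A XOR B and carry = A AND B. A minimal Python sketch that reproduces the truth table:

    def half_adder(a, b):
        # sum = A XOR B, carry = A AND B
        return a ^ b, a & b

    for a in (0, 1):
        for b in (0, 1):
            s, c = half_adder(a, b)
            print(f"A={a} B={b} -> sum={s} carry={c}")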


3.3.2 Full Adder Operation
The Full Adder accepts two input bits and an input carry and generates a sum output
and an output carry.

Schematic Circuit of Half Adder
Schematic Circuit of Full Adder

Truth Table of Full Adder


A Schematic Circuit of Full Adder
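A full adder can be expressed with the standard Boolean functions sum = A XOR B XOR Cin and carry-out = A·B + Cin·(A XOR B), which is equivalent to building it from two half-adders and an OR gate. A minimal sketch:

    def full_adder(a, b, cin):
        # Return (sum, carry_out) for two input bits and an input carry.
        s = a ^ b ^ cin
        cout = (a & b) | (cin & (a ^ b))
        return s, cout

    print(full_adder(1, 0, 1))   # (0, 1): 1 + 0 + 1 = 10 in binary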

Exercise
1. Determine the sum and the output carry of a half-adder for each set of input bits:
A = 0, B = 1
A = 0, B = 0
A = 1, B = 0
A = 1, B = 1
2. A full-adder has Cin = 1. What are the sum and the output carry when
A = 1 and B = 1?
3.3.3 Parallel Binary Adder
As you can see, the full adder is capable of adding two 1-bit numbers and an input
carry. What if we want to add larger numbers, for example two 2-bit numbers? The
solution is to connect another full adder, feeding the carry output of one stage into
the carry input of the next.


To add two binary numbers, a full-adder is required for each bit in the numbers. So, for
2-bit numbers, two adders are needed; for 4-bit numbers, four adders are needed; and so on.

4 Bit Parallel Binary Adder

8-bit Parallel Binary Adder

8-bit Parallel Binary Adder using 4-bit Parallel Binary Adder
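The same idea can be modelled in a few lines of Python by chaining the full-adder function so that each carry output ripples into the next stage (a minimal ripple-carry sketch; the operand values are arbitrary and the bit lists are written least-significant bit first):

    def full_adder(a, b, cin):
        return a ^ b ^ cin, (a & b) | (cin & (a ^ b))

    def parallel_adder(a_bits, b_bits, cin=0):
        # Ripple-carry addition of two equal-length bit lists (LSB first).
        carry, sum_bits = cin, []
        for a, b in zip(a_bits, b_bits):
            s, carry = full_adder(a, b, carry)
            sum_bits.append(s)
        return sum_bits, carry

    # 0110 + 0011 = 1001 (bit lists are least-significant bit first)
    print(parallel_adder([0, 1, 1, 0], [1, 1, 0, 0]))   # ([1, 0, 0, 1], 0)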

Exercise

1. Construct a 4-bit parallel adder block diagram for adding two nibbles.

2. Find the Sum and Output Carry for the addition of the following two 4-bit numbers if the
input carry (Cin) is 0.

A4A3A2A1 = 1100

B4B3B2B1 = 1100


3.3.4 Parallel Binary Adder For Addition & Subtraction

• The operations of both addition and subtraction can be performed by one
common binary adder. Such a circuit can be designed by adding an XOR
gate to each full adder.
• The mode control line M is connected to the carry input of the least
significant full adder. This control line decides the type of operation,
whether addition or subtraction.
• When M = 1 the circuit is a subtractor, and when M = 0 the circuit becomes an
adder. Each XOR gate has two inputs, one connected to a B input and the
other to M.
• When M = 0, B XOR 0 produces B. The full adders then add B to A with an
input carry of zero, so an addition operation is performed.
• When M = 1, B XOR 1 produces the complement of B, and the input carry is 1.
Hence the complemented B inputs are added to A and 1 is added through the
input carry, which forms the two’s complement of B, so the circuit computes A − B.
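A software model of this adder/subtractor is sketched below: each B bit is XORed with the mode line M, and M is also fed into the input carry. The 4-bit width and the operand values are arbitrary choices for illustration.

    def add_sub(a_bits, b_bits, m):
        # 4-bit adder/subtractor: m = 0 gives A + B, m = 1 gives A - B.
        # Bit lists are least-significant bit first.
        carry, result = m, []                 # the mode line feeds the input carry
        for a, b in zip(a_bits, b_bits):
            b = b ^ m                         # XOR gate on each B input
            s = a ^ b ^ carry
            carry = (a & b) | (carry & (a ^ b))
            result.append(s)
        return result, carry

    # A = 0110 (6), B = 0011 (3), written LSB first
    print(add_sub([0, 1, 1, 0], [1, 1, 0, 0], m=0))   # ([1, 0, 0, 1], 0) -> 1001 = 9
    print(add_sub([0, 1, 1, 0], [1, 1, 0, 0], m=1))   # ([1, 1, 0, 0], 1) -> 0011 = 3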
3.3.5 MULTIPLEXER
A multiplexer, abbreviated mux, is a device that has multiple inputs and one output.
A multiplexer selects one of several analogue or digital input signals and forwards the
selected input onto a single line. It is therefore also used as a selector for choosing
which operation the ALU should execute.


The Symbol of Multiplexer (2 to 1 mux) The Truth Table of Multiplexer

Multiplexers are classified into four types:

1. 2-1 multiplexer (1 select line)
2. 4-1 multiplexer (2 select lines)
3. 8-1 multiplexer (3 select lines)
4. 16-1 multiplexer (4 select lines)

Multiplexers are used in the following fields:

1. Communication System
2. Computer Memory
3. Telephone Network
4. Satellite Communication

Multiplexer As A Selector
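The selector behaviour of a 2-to-1 multiplexer corresponds to the Boolean function Y = S'·D0 + S·D1, where S is the select line. A minimal sketch that prints the full truth table (the input names are chosen just for this example):

    def mux2(d0, d1, select):
        # 2-to-1 multiplexer: output d0 when select = 0, d1 when select = 1.
        return (d0 & (select ^ 1)) | (d1 & select)

    for select in (0, 1):
        for d0 in (0, 1):
            for d1 in (0, 1):
                print(f"S={select} D0={d0} D1={d1} -> Y={mux2(d0, d1, select)}")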


3.4 MAGNITUDE COMPARATOR

A digital magnitude comparator is a combinational circuit that compares two digital or
binary numbers in order to find out whether one binary number is equal to, less than or
greater than the other binary number. We design a circuit with two inputs, one for A and
one for B, and three output terminals: one for the A > B condition, one for the A = B
condition and one for the A < B condition.

Truth Table A Block Diagram of Comparator

A Schematic Circuit of Comparator
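For a single bit, the three outputs follow directly from the truth table: A > B is A·B', A < B is A'·B, and A = B is the XNOR of the two bits. A minimal sketch:

    def comparator_1bit(a, b):
        # Return (a_gt_b, a_eq_b, a_lt_b) for two input bits.
        a_gt_b = a & (b ^ 1)          # A AND (NOT B)
        a_lt_b = (a ^ 1) & b          # (NOT A) AND B
        a_eq_b = (a ^ b) ^ 1          # XNOR: 1 when the bits are equal
        return a_gt_b, a_eq_b, a_lt_b

    for a in (0, 1):
        for b in (0, 1):
            print(f"A={a} B={b} ->", comparator_1bit(a, b))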
