
BASIC COMPUTER ORGANIZATION

In this basic computer organization tutorial, we will learn about the basic computer model and the different units of a computer: the Central Processor Unit (CPU), the Input Unit, the Output Unit, and the Memory Unit, including volatile and non-volatile semiconductor memories and secondary memory. We will also cover the generations of computers (first through fifth), the Von Neumann architecture and the stored program concept, its key features and drawbacks, the Harvard architecture, and the difference between the Von Neumann and Harvard architectures.


Basic Computer Model and different units of Computer

The model of a computer can be described by four basic units in high-level abstraction. These basic units are:

  • Central Processor Unit
  • Input Unit
  • Output Unit
  • Memory Unit

[Figure: Central Processor Unit (CPU), Input, Output, and Memory Unit]


A. Central Processor Unit [CPU]:

The central processor unit consists of two basic blocks:
  • The program control unit, which has a set of registers and control circuits to generate control signals.
  • The execution unit (or data processing unit), which contains a set of registers for storing data and an Arithmetic and Logic Unit (ALU) for executing arithmetic and logical operations.
In addition, the CPU may have some additional registers for temporary storage of data.
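To make this concrete, here is a tiny Python sketch (not any real CPU design) of an execution unit: a small register file plus an ALU, with the control unit's role reduced to choosing an operation and the registers involved. All register names and operation codes below are invented purely for illustration.

def alu(op, a, b):
    """Arithmetic and Logic Unit: combine two operands according to op."""
    operations = {
        "ADD": lambda x, y: x + y,
        "SUB": lambda x, y: x - y,
        "AND": lambda x, y: x & y,
        "OR":  lambda x, y: x | y,
    }
    return operations[op](a, b)

# Execution unit: a small set of registers for storing data.
registers = {"R0": 7, "R1": 5, "R2": 0}

# The control unit's "control signal": which operation to perform and
# which registers supply the operands and receive the result.
op, src1, src2, dest = "ADD", "R0", "R1", "R2"

registers[dest] = alu(op, registers[src1], registers[src2])
print(registers)  # {'R0': 7, 'R1': 5, 'R2': 12}

In a real CPU the control signals are electrical signals generated by the control circuits rather than Python values, but the division of labour between the control unit and the execution unit is the same.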





B. Input Unit:

With the help of the input unit, data from outside can be supplied to the computer. A program or data is read into main storage from an input device or secondary storage under the control of a CPU input instruction.
Examples of input devices: keyboard, mouse, hard disk, floppy disk, CD-ROM drive, etc.



C. Output Unit:

With the help of the output unit, computer results can be provided to the user, or they can be stored permanently in a storage device for future use. Output data from main storage goes to the output device under the control of CPU output instructions.

Examples of output devices: printer, monitor, plotter, hard disk, floppy disk, etc.




D. Memory Unit:

The memory unit is used to store data and programs. The CPU can work with the information stored in the memory unit. This memory unit is termed primary memory or the main memory module. These are basically semiconductor memories.

There are two types of semiconductor memories:

1. Volatile memory: RAM (Random Access Memory).

2. Non-volatile memory: ROM (Read Only Memory), PROM (Programmable ROM), EPROM (Erasable PROM), EEPROM (Electrically Erasable PROM).



Secondary Memory:
There is another kind of storage device, apart from primary or main memory, which is known as secondary memory. Secondary memories are non-volatile and are used for permanent storage of data and programs.

Examples of secondary memories:

1. Hard disk, floppy disk, magnetic tape: these are magnetic devices.
2. CD-ROM: an optical device.
3. Thumb drive (or pen drive): a semiconductor memory.




GENERATIONS OF COMPUTER

First Generation of Computer:

The earliest attempt to make an electronic computer using vacuum tubes appears to have been made in the late 1930s. This special-purpose machine was intended for solving linear equations, but the project was never completed. The first successful, widely known general-purpose electronic computer system was the Electronic Numerical Integrator and Computer, or ENIAC.



Second Generation (1956-1963):
In the second generation, transistors replaced the vacuum tubes used in the first generation. The transistor was invented in 1947 but did not see widespread use in computers until the late 1950s. The transistor was far superior to the vacuum tube, allowing computers to become smaller, faster, cheaper, more energy-efficient, and more reliable than their first-generation predecessors. Though the transistor still generated a great deal of heat that subjected the computer to damage, it was a vast improvement over the vacuum tube. Second-generation computers still relied on punched cards for input and printouts for output. Second-generation computers moved from cryptic binary machine language to symbolic, or assembly, languages, which allowed programmers to specify instructions in words. High-level programming languages were also being developed at this time, such as early versions of COBOL and FORTRAN. These were also the first computers that stored their instructions in their memory, which moved from magnetic drum to magnetic core technology. The first computers of this generation were developed for the atomic energy industry.




Third Generation (1964-1971):

The development of the integrated circuit was the hallmark of the third generation of computers. Transistors were miniaturized and placed on silicon chips, called semiconductors, which drastically increased the speed and efficiency of computers. Instead of punched cards and printouts, users interacted with third-generation computers through keyboards and monitors and interfaced with an operating system, which allowed the device to run many different applications at one time with a central program that monitored the memory. Computers for the first time became accessible to a mass audience because they were smaller and cheaper than their predecessors.



Fourth Generation (1971-Present):
The microprocessor brought the fourth generation of computers, as thousands of integrated circuits were built onto a single silicon chip. What in the first generation filled an entire room could now fit in the palm of the hand. The Intel 4004 chip, developed in 1971, located all the components of the computer, from the central processing unit and memory to input/output controls, on a single chip. In 1981 IBM introduced its first computer for the home user, and in 1984 Apple introduced the Macintosh. Microprocessors also moved out of the realm of desktop computers and into many areas of life as more and more everyday products began to use microprocessors. As these small computers became more powerful, they could be linked together to form networks, which eventually led to the development of the Internet. Fourth-generation computers also saw the development of GUIs, the mouse, and handheld devices.




Fifth Generation (1980-Present):
The period of the fifth generation is 1980 to the present. In the fifth generation, VLSI technology became ULSI (Ultra Large Scale Integration) technology, resulting in the production of microprocessor chips with ten million electronic components.
This generation is based on parallel processing hardware and AI (Artificial Intelligence) software. AI is an emerging branch of computer science which interprets the means and methods of making computers think like human beings. All the high-level languages like C and C++, Java, .Net, etc., are used in this generation.

[Figure: Generations of computer - short description with list]




Von Neumann Architecture

The architecture suggested by John Von Neumann is referred to as the Von Neumann architecture; it has structural blocks similar to the constituent units suggested by Charles Babbage. Von Neumann identified five blocks to perform operations on the data.

[Figure: Von Neumann Architecture]


The blocks are the Input block, the Memory block, the Output block, the Arithmetic and Logic Unit (ALU) block, and the Control Unit block. Traditionally, the ALU and Control Unit blocks are built together.

The functions of these two blocks are complementary to each other to the extent that they are better built together. These two blocks together are referred to as the Central Processing Unit (CPU). The units work with the inherent philosophy of the stored program concept given by Von Neumann.




Stored Program Concept:
The concept utilizes the memory to store all the instructions to be performed by the computer for a particular task prior to execution. The required data are also to be stored in memory at execution time. The CPU fetches one instruction from memory, decodes it, and executes it. At the end of the execution of the current instruction, it fetches the next instruction, and the cycle continues till the job is finished.
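As a rough illustration of this cycle, here is a minimal Python sketch assuming a made-up one-address instruction set (LOAD, ADD, STORE, HALT), a single accumulator register, and one memory that holds both the program and the data, as the stored program concept requires. It only sketches the fetch-decode-execute loop; it does not model any real processor.

# One memory holds both the program (as tuples) and the data (as numbers).
memory = {
    0: ("LOAD", 10),    # ACC <- memory[10]
    1: ("ADD", 11),     # ACC <- ACC + memory[11]
    2: ("STORE", 12),   # memory[12] <- ACC
    3: ("HALT", None),
    10: 20,             # data
    11: 22,             # data
    12: 0,              # result goes here
}

pc = 0        # program counter: address of the next instruction
acc = 0       # accumulator: a single working register

while True:
    opcode, operand = memory[pc]   # fetch (one trip to memory)
    pc += 1
    if opcode == "HALT":           # decode and execute
        break
    elif opcode == "LOAD":
        acc = memory[operand]      # another trip to memory for the operand
    elif opcode == "ADD":
        acc += memory[operand]
    elif opcode == "STORE":
        memory[operand] = acc

print(memory[12])   # 42

Notice that every instruction fetch and every operand access in this loop goes to the same memory; that single shared path is exactly the point picked up below as the Von Neumann bottleneck.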


The key features of the Von Neumann architecture are as follows:

  • The computer reads the instruction set from the outside world through the input device.
  • The memory gets the instructions through the Arithmetic and Logic Unit (ALU) and stores them within.
  • The Control Unit of the CPU fetches one instruction at a time from the memory to the ALU, analyses it, and fetches the required data.
  • The ALU executes the instruction and stores the result back to memory if required.
  • To give an output, the content from the ALU is given to the output device.
  • The Control Unit of the CPU controls all operations. It executes instructions in sequential order unless the effect of an instruction is to change the sequence of instructions.


It can be well observed that there is a single path between memory and the ALU. Each instruction and operand needs to be fetched from memory, and intermediate results also need to be stored in memory. This path between the memory and the ALU is kept busy almost every moment. Being a single path, it is very critical; it is called the bottleneck of the Von Neumann architecture.

The Von Neumann machines are termed the IAS machine or Princeton machine by some authors, because the design of the system was done at the Institute for Advanced Study (IAS) in Princeton, USA. Very few computers have a pure Von Neumann architecture. Most computers add another step to check for interrupts, electronic events that could occur at any time. Interrupts let a computer do other things while it waits for events.




Drawback:
Von Neumann computers spend a lot of time moving data to and from memory, and this slows the computer down (this problem is called the Von Neumann bottleneck). To reduce it, engineers often separate the bus into two or more buses, usually one for instructions and the other for data.




Harvard Architecture


The Harvard architecture is a computer architecture with physically separate storage and signal pathways for instructions and data. The term originated from the Harvard Mark I relay-based computer, which stored instructions on punched tape (24 bits wide) and data in electro-mechanical counters. These early machines had limited data storage, entirely contained within the central processing unit, and provided no access to the instruction storage as data. Programs needed to be loaded by an operator; the processor could not boot itself.

[Figure: Harvard Architecture]


In the Harvard architecture, there is no need for the two memories to share characteristics. In particular, the word width, timing, implementation technology, and memory address structure can differ. In some systems, instructions can be stored in read-only memory, while data memory generally requires read-write memory.

The Harvard architecture uses physically separate memories for instructions and data, requiring dedicated buses for each of them. Instructions and operands can therefore be fetched simultaneously. Different program and data bus widths are possible, allowing program and data memory to be better optimized to the architectural requirements. For example, if the instruction format requires 14 bits, then the program bus and program memory can be made 14 bits wide, while the data bus and data memory remain 8 bits wide.
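Below is a small Python sketch of this idea, with a separate program memory of 14-bit words and a data memory of 8-bit words, each accessed through its own helper standing in for a dedicated bus. The word widths follow the example above, but the instruction words themselves are made-up values, and no decoding is done, only fetching.

program_memory = [0b01_0000_0000_0001,   # 14-bit instruction words
                  0b10_0000_0000_0010]
data_memory = bytearray([20, 22])        # 8-bit data words

def fetch_instruction(addr):
    return program_memory[addr] & 0x3FFF   # value fits in 14 bits

def fetch_data(addr):
    return data_memory[addr]               # value fits in 8 bits

# With dedicated buses, the next instruction and a data operand are
# fetched in the same step instead of taking turns on one shared path.
for step in range(len(program_memory)):
    instruction = fetch_instruction(step)   # over the program bus
    operand = fetch_data(step)              # over the data bus, same step
    print(f"step {step}: instruction {instruction:014b}, operand {operand}")

The point of the sketch is simply that the two fetches in each step do not compete for one shared path, which is exactly what the separate buses of the Harvard architecture provide.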



Difference between Von Neumann and Harvard Architecture.
[Figure: Von Neumann and Harvard architecture differences]

  • Von Neumann architecture: instructions and data share a single memory and a single path between memory and the CPU, so they must be fetched one at a time; this shared path is the bottleneck.
  • Harvard architecture: instructions and data have physically separate memories with dedicated buses, so they can be fetched simultaneously, and the two memories may differ in word width and other characteristics.
