Joseph Marie Jacquard (1752-1834)
It took inventor Joseph M. Jacquard to bring together Bouchon's idea of a continuous punched roll and Falcon's idea of durable punched cards to produce a truly workable programmable loom. Weaving operations were controlled by punched cards tied together to form a long loop, and as many cards as needed could be added. Each time a thread was woven in, the loop was clicked forward by one card. The results revolutionized the weaving industry and made Jacquard a great deal of money. This idea of punched data storage was later adapted for computer data input.
First Generation Computers (1940s – 1950s)
First-generation computers used vacuum tubes, which made them huge and complex. The first general-purpose electronic computer was the ENIAC (Electronic Numerical Integrator And Computer). It was digital, although it did not operate with binary code, and it was reprogrammable, which let it tackle a complete range of computing problems. It was programmed using plugboards and switches, supporting input from an IBM card reader and output to an IBM card punch.
Second Generation Computers (1955 – 1960)
Second-generation computers used transistors, which began replacing vacuum tubes in computer design. These transistor computers consumed far less power, produced far less heat, and were much smaller than the first generation, though still large by today's standards.
The first transistor computer was built at the University of Manchester in 1953. The most popular of the transistor computers was the IBM 1401.
Third Generation Computers (1960s)
The third generation of computers began with the invention of integrated circuits (ICs), also called microchips, which paved the way for computers as we know them today. Making circuits out of single pieces of silicon, a semiconductor, allowed them to be much smaller and more practical to produce. This also set in motion the ongoing process of integrating ever larger numbers of transistors onto a single microchip. During the sixties, microchips started making their way into computers, but the process was gradual, and second-generation computers lived on.
Then came the first minicomputers, the earliest of which were still based on non-microchip transistors; later versions were hybrids, built from both transistors and microchips, like IBM's System/360. They were much smaller and less expensive than the first- and second-generation computers, which are also called mainframes.
Fourth Generation Computers (1971 – present)
The first microchip-based central processing units consisted of several microchips for the different CPU components. The drive for ever greater integration and miniaturization led to single-chip CPUs, in which all the required CPU components were put onto a single microchip, known as a microprocessor. The first single-chip CPU, or microprocessor, was the Intel 4004.
First Microcomputers (1971 – 1976)
The first microcomputers were a weird bunch. They often came in kits, and many were essentially just boxes with lights and switches, usable only by engineers and hobbyists who could understand binary code. Some, however, did come with a keyboard and/or a monitor, bearing somewhat more resemblance to modern computers. It is arguable which of the early microcomputers could be called the first. The CTC Datapoint 2200 is one candidate, although it did not actually contain a microprocessor (being based on a multi-chip CPU design instead) and was not meant to be a standalone computer, but merely a terminal for mainframes. The reason some might consider it the first microcomputer is that it could be used as a de facto standalone computer, it was small enough, and its multi-chip CPU architecture became the basis for the x86 architecture later used in the IBM PC and its descendants. Plus, it even came with a keyboard and a monitor, an exception in those days.