Computer

In current usage, a computer is a device that processes information according to a well-defined procedure.

The word was originally used to describe people who were employed to do arithmetic calculations, with or without mechanical aids; Leibniz himself famously complained of the time he expended in performing calculations. Starting in the 1950s, the term "computing machine" was used to refer to the machines themselves; eventually the shorter word "computer" displaced it. Originally, computing was almost exclusively related to arithmetical problems, but modern computers are used for many tasks unrelated to mathematics, as their cost has declined, their performance has increased, and their size has decreased.

Within such a definition sit mechanical devices such as the slide rule, the whole gamut of mechanical calculators from the abacus onwards, and all contemporary electronic computers. Terms better suited to such a broad meaning would be "information processor", "information processing system", or even "controller".

For more details, see the section The word "computer" below.

Definitions

However, the above definition includes many special-purpose devices that can compute only one function or a limited range of functions. The most notable characteristic distinguishing modern computers from earlier computing devices is that, given the right programming, any computer can emulate the behaviour of any other (limited only by storage capacity and execution speed); indeed, it is believed that current machines can emulate any future computing device we invent (though undoubtedly more slowly). This threshold capability is therefore a useful test for distinguishing "general-purpose" computers from earlier special-purpose devices.

This "general-purpose" definition can be formalised into the requirement that a machine be able to emulate the behaviour of a universal Turing machine. Machines meeting this definition are referred to as Turing-complete. Strictly speaking, such machines are physically impossible, as they would require unlimited storage and perfect reliability, so the attribute Turing-complete is also used in a laxer sense for machines that would be universal if they had unlimited storage and were absolutely reliable. The first such machine appeared in 1941: the program-controlled Z3 of Konrad Zuse (though its Turing-completeness was shown only much later, in 1998). Other machines followed in a flurry of developments around the world. See the history of computing article for more details of this period.

Embedded computers

In the last 20 years or so, however, many devices besides computers proper, notably video game consoles but also mobile telephones, video cassette recorders, PDAs, and a myriad of other household, industrial, automotive, and other electronic devices, have come to contain computer-like circuitry capable of meeting the above Turing-completeness requirement (with the proviso that the programming of these devices is often hardwired into a ROM chip, which would need to be replaced to change the programming of the machine). These computers inside other special-purpose devices are commonly referred to as "microcontrollers" or "embedded computers". Therefore, many people restrict the definition of computers to devices whose primary purpose is information processing, rather than being a part of a larger system such as a telephone, microwave oven, or aircraft (but see: avionics), and which can be adapted for a variety of purposes by the user without physical modification. Mainframe computers, minicomputers, and personal computers are the main types of computers meeting this definition.

Personal computers

Finally, many people who are unfamiliar with other forms of computers use the term exclusively to refer to personal computers (PCs).

How computers work

While the technologies used in digital computers have changed dramatically since the first computers of the 1940s (see History of computing hardware for more details), most still use the von Neumann architecture, proposed in 1945 by John von Neumann.

Von Neumann's architecture describes a computer with four main sections: the Arithmetic and Logic Unit (ALU), the control circuitry, the memory, and the input and output devices (collectively termed I/O). These parts are interconnected by a bundle of wires, a "bus."

Memory

In this system, memory is a sequence of numbered cells, each containing a small piece of information. The information may be an instruction telling the computer what to do, or data that the computer needs in order to carry out an instruction. Any cell may contain either, and indeed what is data at one time might be instructions later.

In general, the contents of a memory cell can be changed at any time - it is a scratchpad rather than a stone tablet.
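
As an illustrative sketch (not tied to any particular machine), memory can be pictured in Python as a numbered list of cells, where the same cell may hold data at one moment and an instruction at another:

    # A toy memory of 16 numbered cells, each initially holding zero.
    memory = [0] * 16

    # Cell 5 is used as data: it holds the number 42.
    memory[5] = 42

    # Later, the same cell could just as well hold an instruction,
    # here encoded (arbitrarily) as the number 1, meaning "copy".
    memory[5] = 1

    # The contents of any cell can be read or overwritten at any time.
    print(memory[5])   # prints 1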

The size of each cell, and the number of cells, varies greatly from computer to computer, and the technologies used to implement memory have varied greatly - from electromechanical relays, to mercury-filled tubes (and later springs) in which acoustic pulses were formed, to matrices of permanent magnets, to individual transistors, to integrated circuits with millions of capacitors on a single chip.

Processing

The arithmetic and logical unit, or ALU, is the device that performs elementary operations such as arithmetic operations (addition, subtraction, and so on), logical operations (AND, OR, NOT), and comparison operations (for example, comparing the contents of two bytes for equality). This unit is where the "real work" is done.
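
The kinds of elementary operations involved can be sketched in Python as follows; the operator names here are Python's, not those of any particular processor:

    a, b = 0b01101100, 0b00001111   # two example byte values

    sum_result = (a + b) & 0xFF     # arithmetic: addition, kept within one byte
    and_result = a & b              # logical AND of the two bit patterns
    or_result  = a | b              # logical OR
    not_result = (~a) & 0xFF        # logical NOT, kept within one byte
    are_equal  = (a == b)           # comparison: are the two bytes equal?

    print(bin(and_result), bin(or_result), bin(not_result), are_equal)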

The control unit keeps track of which bytes in memory contain the instruction the computer is currently performing. It tells the ALU what operation to carry out, retrieves from memory the information the ALU needs to perform it, and transfers the result back to the appropriate memory location. Once that is done, the control unit moves on to the next instruction, typically located at the next memory address, unless the instruction was a jump instruction informing the computer that the next instruction is located elsewhere.
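
To make this fetch-and-execute cycle concrete, here is a minimal sketch of a toy machine in Python. The instruction encoding (0 = halt, 1 = copy, 2 = add, 3 = jump if zero) is invented purely for illustration and does not correspond to any real machine language:

    # Each instruction occupies four cells: opcode, operand, operand, destination.
    HALT, COPY, ADD, JUMP_IF_ZERO = 0, 1, 2, 3

    def run(memory):
        pc = 0                                # the control unit's instruction counter
        while True:
            op, x, y, z = memory[pc:pc + 4]   # fetch the current instruction
            if op == HALT:
                break
            elif op == COPY:                  # copy contents of cell x into cell z
                memory[z] = memory[x]
            elif op == ADD:                   # add cells x and y, store in cell z
                memory[z] = memory[x] + memory[y]
            elif op == JUMP_IF_ZERO:          # if cell x holds 0, jump to address z
                if memory[x] == 0:
                    pc = z
                    continue
            pc += 4                           # otherwise move on to the next instruction

    # A tiny program: add cells 20 and 21, place the result in cell 22, then halt.
    mem = [ADD, 20, 21, 22, HALT, 0, 0, 0] + [0] * 16
    mem[20], mem[21] = 7, 35
    run(mem)
    print(mem[22])   # prints 42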

Input and output

The I/O allows the computer to obtain information from the outside world, and send the results of its work back there. There is an incredibly broad range of I/O devices, from the familiar keyboards, monitors and floppy disk drives, to the more unusual such as webcams.

What all input devices have in common is that they encode (convert) information of some type into data which can be further processed by the digital computer system. Output devices, on the other hand, decode that data back into information which can be understood by the computer user. In this sense, a digital computer system is an example of a data processing system.
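
For instance (a simplified sketch in Python that ignores the electronics involved), pressing the key "A" on a keyboard is encoded into a number the machine can store, and an output device later decodes that number back into a visible character:

    key_pressed = "A"

    encoded = ord(key_pressed)        # the input device's job: character -> number (here 65)
    as_bits = format(encoded, "08b")  # the same number as the bit pattern actually stored

    decoded = chr(encoded)            # the output device's job: number -> character again

    print(encoded, as_bits, decoded)  # 65 01000001 A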

Instructions

The instructions discussed above are not the rich instructions of a human language. A computer only has a limited number of well-defined, simple instructions. Typical sorts of instructions supported by most computers are "copy the contents of cell 123, and place the copy in cell 456", "add the contents of cell 666 to cell 042, and place the result in cell 013", and "if the contents of cell 999 are 0, your next instruction is at cell 345".

Instructions are represented within the computer as numbers - the code for "copy" might be 001, for example. The particular instruction set that a specific computer supports is known as that computer's machine language. In practice, people do not normally write instructions for computers directly in machine language but rather use a "high-level" programming language, which is then translated into machine language automatically by special computer programs (interpreters and compilers). Some programming languages, such as assembly language, map very closely to the machine language (low-level languages); at the other end, languages like Prolog are based on abstract principles far removed from the details of the machine's actual operation (high-level languages).
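
The difference in level can be sketched as follows; the numeric codes are the invented ones used in the toy machine above, not any real machine language:

    # Machine language (invented encoding): the code 2 means "add", so this list
    # says "add cell 20 to cell 21 and place the result in cell 22".
    machine_program = [2, 20, 21, 22]

    # The same intention expressed in a high-level language is one readable line,
    # which a compiler or interpreter translates into numeric codes like the above.
    forecast, adjustment = 7, 35
    result = forecast + adjustment

    print(machine_program, result)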

Architecture

Contemporary computers put the ALU and control unit into a single integrated circuit known as the Central Processing Unit or CPU. Typically, the computer's memory is located on a few small integrated circuits near the CPU. The overwhelming majority of the computer's mass is either ancillary systems (for instance, to supply electrical power) or I/O devices.

Some larger computers differ from the above model in one major respect - they have multiple CPUs and control units working simultaneously. Additionally, a few computers, used mainly for research purposes and scientific computing, have differed significantly from the above model, but they have found little commercial application.

The functioning of a computer is therefore in principle quite straightforward. The computer fetches instructions and data from its memory. The instructions are executed, the results are stored, and the next instruction is fetched. This procedure repeats until the computer is turned off.

Programs

Computer programs are simply large lists of instructions for the computer to execute, perhaps with tables of data. Many computer programs contain millions of instructions, and many of those instructions are executed repeatedly. A typical modern PC (in the year 2003) can execute around 2-3 billion instructions per second. Computers do not gain their extraordinary capabilities through the ability to execute complex instructions. Rather, they execute millions of simple instructions arranged by clever people, "programmers". Good programmers develop sets of instructions to do common tasks (for instance, drawing a dot on screen) and then make those sets of instructions available to other programmers.
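
A small Python sketch of that idea, with invented routine names used purely for illustration: one programmer writes a routine for a common task, and others build on it without rewriting it.

    # A simple text "screen" represented as a grid of characters.
    WIDTH, HEIGHT = 20, 5
    screen = [[" "] * WIDTH for _ in range(HEIGHT)]

    def draw_dot(x, y):
        # A reusable set of instructions for one common task: plot a single point.
        screen[y][x] = "*"

    def draw_line(y):
        # Built on top of draw_dot by another programmer: plot a horizontal line.
        for x in range(WIDTH):
            draw_dot(x, y)

    draw_line(2)
    print("\n".join("".join(row) for row in screen))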

Nowadays, most computers appear to execute several programs at the same time. This is usually referred to as multitasking. In reality, the CPU executes instructions from one program, then after a short period of time, it switches to a second program and executes some of its instructions. This small interval of time is often referred to as a time slice. This creates the illusion of multiple programs being executed simultaneously by sharing the CPU's time between the programs. This is similar to how a movie is simply a rapid succession of still frames. The operating system is the program that usually controls this time sharing.
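
A minimal sketch of time slicing in Python: real operating systems schedule at the level of machine instructions and hardware timers, but here each "program" is just a queue of named steps, and the scheduler hands out one short time slice to each in turn.

    from collections import deque

    # Two toy "programs", each a queue of steps still waiting to run.
    programs = {
        "editor":  deque(["read keystroke", "update document", "redraw window"]),
        "printer": deque(["format page", "send page to printer"]),
    }

    # Round-robin scheduling: give each program one time slice, then switch.
    ready = deque(programs)
    while ready:
        name = ready.popleft()
        step = programs[name].popleft()    # run one step during this time slice
        print(f"{name}: {step}")
        if programs[name]:                 # if the program has more work, queue it again
            ready.append(name)

The printed output interleaves the two programs' steps, which is the illusion of simultaneity described above.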

Operating system

A computer needs at least one program running in order to operate. Under normal operation this program is the operating system (OS). The operating system decides which programs are run, when, and what resources (such as memory or I/O) they get to use. It also provides a layer of abstraction over the hardware, giving other programs access to it through services such as device-handling code ("drivers"), so that programmers can write programs for a machine without needing to know the intimate details of every attached electronic device.
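
A hedged sketch of what such a layer of abstraction means in practice; the class and method names below are invented for illustration, and real driver interfaces are defined by each operating system:

    class PrinterDriver:
        # Hides the device-specific details behind one simple service.
        def print_text(self, text):
            # In a real driver, device-specific commands would be sent to the hardware here.
            print(f"[printer hardware receives]: {text}")

    class OperatingSystem:
        def __init__(self, drivers):
            self.drivers = drivers
        def request(self, device, data):
            # Application programs ask the OS for a service; they never touch the device directly.
            self.drivers[device].print_text(data)

    system = OperatingSystem({"printer": PrinterDriver()})
    system.request("printer", "Hello, world")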

Uses of computers

The first digital computers, with their large size and cost, mainly performed scientific calculations. ENIAC, an early US computer originally designed to calculate ballistics firing tables for artillery, calculated neutron cross-sectional densities to see if the hydrogen bomb would work properly (this calculation, performed in December 1945 through January 1946 and involving over a million punch cards of data, showed the design then under consideration would fail). (Interestingly, many of the most powerful supercomputers available today are also used for nuclear weapons simulations.) The CSIR Mk I, the first Australian computer, evaluated rainfall patterns for the catchment of the Snowy Mountains Scheme, a large hydroelectric generation project. Others were used in cryptanalysis, for example the world's first programmable (though not Turing-complete) digital electronic computer, Colossus, built during World War II. However, early visionaries also anticipated that programming would allow chess playing, moving pictures, and other uses.

People in governments and large corporations also used computers to automate many of the data collection and processing tasks previously performed by humans - for example, maintaining and updating accounts and inventories. In academia, scientists of all sorts began to use computers for their own analyses. Continual reductions in the cost of computers saw them adopted by ever-smaller organizations. Businesses, organizations, and governments often employ large numbers of small computers to accomplish tasks that were previously done by an expensive, large mainframe computer. A collection of such smaller computers in one location is referred to as a server farm.

With the invention of the microprocessor in the 1970s, it became possible to produce very inexpensive computers. Personal computers became popular for many tasks, including keeping books, writing and printing documents, calculating forecasts and other repetitive mathematics with spreadsheets, communicating by e-mail, and using the Internet. However, computers' wide availability and easy customization have seen them used for many other purposes.

At the same time, small computers, usually with fixed programming, began to find their way into other devices such as home appliances, automobiles, aeroplanes, and industrial equipment. These embedded processors controlled the behaviour of such devices more easily, allowing more complex control behaviours (for instance, the development of anti-lock brakes in cars). By the start of the twenty-first century, most electrical devices, most forms of powered transport, and most factory production lines are controlled by computers. Most engineers predict that this trend will continue.

The word "computer"

Over the years there have been several slightly different meanings of the word computer, and several different words for the thing we now usually call a computer.

For instance "computer" was once commonly used to mean a person employed to do arithmetic calculations, with or without mechanical aids. According to the Barnhart Concise Dictionary of Etymology, the word came into use in English in 1646 as a word for a "person who computes" and then by 1897 also for a mechanical calculating machine. During World War II it referred to U.S. and British servicewomen whose job it was to calculate the trajectories of large artillery shells with such machines.

Charles Babbage designed one of the first computing machines, the Analytical Engine, but owing to technological problems it was not built in his lifetime (a working model of his earlier Difference Engine was completed in 1991). Various simple mechanical devices such as the slide rule and abacus have also been called computers. In some cases they were referred to as "analog computers", as they represented numbers by continuous physical quantities rather than by discrete binary digits. What are now called simply "computers" were once commonly called "digital computers" to distinguish them from these other devices (which are still used in the field of analog signal processing, for example).

In thinking of other words for the computer, it is worth noting that in other languages the word chosen does not always have the same literal meaning as the English word. In French, for example, the word is "ordinateur", which means approximately "organizer" or "sorting machine". The Spanish word is "ordenador", with the same meaning, although in some countries the anglicism "computadora" is used. In Portuguese, it takes the form "computador", from the verb "computar", which means "to compute" or "to calculate". In Italian, a computer is a "calcolatore" (calculator), emphasizing its computational uses over logical ones such as sorting. In Swedish, a computer is called "dator", from "data"; at least in the 1950s, such machines were called "matematikmaskin" ("mathematics machine"). In Finnish, a computer is called "tietokone", which means "information machine". The Icelandic name for a computer is more poetic: the word "tölva" is a portmanteau meaning "number prophetess". In Chinese, a computer is called "dian nao", meaning "electric brain". In English, other words and phrases have been used, such as "data processing machine".
