
A computer is a general-purpose device that can be programmed to carry out a finite set of arithmetic or logical operations. Since a sequence of operations can be readily changed, the computer can solve more than one kind of problem.

Conventionally, a computer consists of at least one processing element, typically a central processing unit (CPU), and some form of memory. The processing element carries out arithmetic and logic operations, and a sequencing and control unit can change the order of operations in response to stored information. Peripheral devices allow information to be retrieved from an external source, and the results of operations to be saved and retrieved.

The first electronic digital computers were developed between 1940 and 1945 in the United Kingdom and United States. Originally they were the size of a large room, consuming as much power as several hundred modern personal computers (PCs).[1] In this era, mechanical analog computers were used for military applications.

Modern computers based on integrated circuits are millions to billions of times more capable than the early machines, and occupy a fraction of the space.[2] Simple computers are small enough to fit into mobile devices, and mobile computers can be powered by small batteries. Personal computers in their various forms are icons of the Information Age and are what most people think of as “computers.” However, the embedded computers found in many devices from MP3 players to fighter aircraft and from toys to industrial robots are the most numerous.

Computer hardware is the collection of physical elements that constitute a computer system: the physical parts or components of a computer, such as the monitor, keyboard, data storage, hard disk drive, mouse and printer, as well as internal components such as the CPU, graphics card, sound card, memory, motherboard and chips. All of these are physical objects that can actually be touched. In contrast, software is intangible: it exists as ideas, applications, concepts, and symbols, but it has no substance. A combination of hardware and software forms a usable computing system.

See also

Computer architecture
Embedded computer
History of computing hardware
Mainframe computer
Minicomputer
Personal computer
Personal computer hardware
Von Neumann architecture
Central processing unit
Computer data storage
Input/output
Peripheral

Media related to Computer hardware at Wikimedia Commons


Basic computer components

Input devices

Keyboard
Image scanner
Microphone
Pointing device (graphics tablet, joystick, light pen, mouse, pointing stick, touchpad, touchscreen, trackball)
Webcam
Softcam

Output devices

Monitor
Printer
Speakers
Plotter

Removable data storage

Optical disc drive (CD-RW, DVD+RW)
Disk pack
Floppy disk
Memory card
USB flash drive

Computer case

Central processing unit (CPU)
Hard disk / Solid-state drive
Motherboard
Network interface controller
Power supply
Random-access memory (RAM)
Sound card
Video card

Data ports

Ethernet
FireWire (IEEE 1394)
Parallel port
Serial port
Universal Serial Bus (USB)

Computer software, or just software, is any set of machine-readable instructions (most often in the form of a computer program) that directs a computer's processor to perform specific operations. The term is used to contrast with computer hardware, the physical objects (processor and related devices) that carry out the instructions. Hardware and software require each other; neither has any value without the other.

Firmware is software that has been permanently stored in hardware (specifically in non-volatile memory). It thus has qualities of both software and hardware.

Software is a general term. It can refer to all computer instructions in general or to any specific set of computer instructions. It is inclusive of both machine instructions (the binary code that the processor understands) and source code (more human-understandable instructions that must be rendered into machine code by compilers or interpreters before being executed).

On most computer platforms, software can be grouped into a few broad categories:

System software is the basic software needed for a computer to operate (most notably the operating system).

Application software is all the software that uses the computer system to perform useful work beyond the operation of the computer itself.

Embedded software resides as firmware within embedded systems, devices dedicated to a single use. In that context there is no clear distinction between system and application software.

Software refers to one or more computer programs and data held in the storage of the computer. In other words, software is a set of programs, procedures, algorithms and their documentation concerned with the operation of a data-processing system. Software that implements a program performs the function of that program, either by directly providing instructions to the digital electronics or by serving as input to another piece of software. The term was coined to contrast with hardware (meaning physical devices); in contrast to hardware, software "cannot be touched".[1] Software is also sometimes used in a narrower sense, meaning application software only. Sometimes the term includes data that has not traditionally been associated with computers, such as film, tapes, and records.[2]

Computer software is so called to distinguish it from computer hardware, which encompasses the physical interconnections and devices required to store and execute (or run) the software. At the lowest level, executable code consists of machine-language instructions specific to an individual processor. A machine language consists of groups of binary values signifying processor instructions that change the state of the computer from its preceding state. A program is an ordered sequence of such instructions for changing the state of the computer in a particular sequence. Programs are usually written in high-level programming languages that are easier and more efficient for humans to use (closer to natural language) than machine language. High-level languages are compiled or interpreted into machine-language object code. Software may also be written in an assembly language, essentially a mnemonic representation of a machine language using a natural-language alphabet. Assembly language must be assembled into object code via an assembler.


History

For the history prior to 1946, see History of computing hardware.

The first theory about software was proposed by Alan Turing in his 1936 essay "On Computable Numbers, with an Application to the Entscheidungsproblem" (decision problem).[3] Colloquially, the term is often used to mean application software. In computer science and software engineering, software is all information processed by computer systems, including programs and data. The academic fields studying software are computer science and software engineering.

As more and more programs enter the realm of firmware, and the hardware itself becomes smaller, cheaper and faster (as predicted by Moore's law), elements of computing first considered to be software join the ranks of hardware. Most hardware companies today have more software programmers on the payroll than hardware designers[citation needed], since software tools have automated many tasks of printed circuit board engineers. Just like the auto industry, the software industry has grown from a few visionaries operating out of their garages with prototypes. Steve Jobs and Bill Gates were the Henry Ford and Louis Chevrolet of their times[citation needed], capitalizing on ideas already commonly known before they started in the business. In the case of software development, this moment is generally agreed to be the publication in the 1980s of the specifications for the IBM Personal Computer by IBM employee Philip Don Estridge. Today his move would be seen as a type of crowd-sourcing.

Until that time, software was bundled with the hardware by original equipment manufacturers (OEMs) such as Data General, Digital Equipment and IBM[citation needed]. When a customer bought a minicomputer, at that time the smallest computer on the market, the computer did not come with pre-installed software; it had to be installed by engineers employed by the OEM. Computer hardware companies not only bundled their software, they also placed demands on the location of the hardware, in a refrigerated space called a computer room. Most companies carried their software on the books at zero dollars, unable to claim it as an asset (this is similar to the financing of popular music in those days). When Data General introduced the Data General Nova, a company called Digidyne wanted to use its RDOS operating system on its own hardware clone. Data General refused to license its software (which was hard to do, since it was on the books as a free asset) and claimed its "bundling rights". In Digidyne v. Data General (1985), the Supreme Court let a Ninth Circuit decision stand, and Data General was eventually forced into licensing the operating system because restricting the license to only DG hardware was ruled an illegal tying arrangement.[4] Unable to sustain the loss from lawyers' fees, Data General ended up being taken over by EMC Corporation. The Supreme Court decision made it possible to value software and to purchase software patents.

There are many successful companies today that sell only software products, though there are still many common software licensing problems, due to the complexity of designs and poor documentation, which has contributed to the rise of patent trolls.

With open software specifications and the possibility of software licensing, new opportunities arose for software tools that then became the de facto standard, such as DOS for operating systems, but also various proprietary word processing and spreadsheet programs. In a similar growth pattern, proprietary development methods became standard software development methodology.

Types of software

A layer structure showing where the operating system software and application software are situated while running on a typical desktop computer

Software includes all the various forms and roles that digitally stored data may have and play in a computer (or similar system), regardless of whether the data is used as code for a CPU, or other interpreter, or whether it represents other kinds of information. Software thus encompasses a wide array of products that may be developed using different techniques such as ordinary programming languages, scripting languages, microcode, or an FPGA configuration.

The types of software include web pages developed in languages and frameworks like HTML, PHP, Perl, JSP, ASP.NET and XML, and desktop applications like OpenOffice.org and Microsoft Word developed in languages like C, C++, Objective-C, Java, C#, or Smalltalk. Application software usually runs on underlying system software such as an operating system, for example Linux or Microsoft Windows. Software (or firmware) is also used in video games and for the configurable parts of the logic systems of automobiles, televisions, and other consumer electronics.

Practical computer systems divide software systems into three major classes[citation needed]: system software, programming software and application software, although the distinction is arbitrary and often blurred.

System software

Main article: System software

System software is computer software designed to operate the computer hardware, to provide basic functionality, and to provide a platform for running application software.[5] System software includes device drivers, operating systems, servers, utilities, and window systems.

System software is responsible for managing a variety of independent hardware components, so that they can work together harmoniously. Its purpose is to unburden the application software programmer from the often complex details of the particular computer being used, including such accessories as communications devices, printers, device readers, displays and keyboards, and also to partition the computer's resources, such as memory and processor time, in a safe and stable manner.

Programming software

Main article: Programming tool

Programming software includes tools in the form of programs or applications that software developers use to create, debug, maintain, or otherwise support other programs and applications. The term usually refers to relatively simple programs such as compilers, debuggers, interpreters, linkers, and text editors, which can be combined to accomplish a task, much as one might use multiple hand tools to fix a physical object. Programming tools are intended to assist a programmer in writing computer programs, and they may be combined in an integrated development environment (IDE) to more easily manage all of these functions.

Software topics

Architecture

See also: Software architecture

Users often see things differently than programmers. People who use modern general-purpose computers (as opposed to embedded systems, analog computers and supercomputers) usually see three layers of software performing a variety of tasks: platform, application, and user software.

Platform software: The platform includes the firmware, device drivers, an operating system, and typically a graphical user interface which, in total, allow a user to interact with the computer and its peripherals (associated equipment). Platform software often comes bundled with the computer. On a PC one will usually have the ability to change the platform software.

Application software: Application software, or applications, are what most people think of when they think of software. Typical examples include office suites and video games. Application software is often purchased separately from computer hardware. Sometimes applications are bundled with the computer, but that does not change the fact that they run as independent applications. Applications are usually independent programs from the operating system, though they are often tailored for specific platforms. Most users think of compilers, databases, and other "system software" as applications.

User-written software: End-user development tailors systems to meet users' specific needs. User software includes spreadsheet templates and word processor templates. Even email filters are a kind of user software. Users create this software themselves and often overlook how important it is. Depending on how competently the user-written software has been integrated into default application packages, many users may not be aware of the distinction between the original packages and what has been added by co-workers.

Execution

Main article: Execution (computing)

Computer software has to be "loaded" into the computer's storage (such as the hard drive or memory). Once the software has loaded, the computer is able to execute the software. This involves passing instructions from the application software, through the system software, to the hardware which ultimately receives the instruction as machine code. Each instruction causes the computer to carry out an operation – moving data, carrying out a computation, or altering the control flow of instructions.

Data movement is typically from one place in memory to another. Sometimes it involves moving data between memory and registers, which enable high-speed data access in the CPU. Moving data, especially large amounts of it, can be costly; this is sometimes avoided by using "pointers" to data instead. Computations include simple operations such as incrementing the value of a variable data element. More complex computations may involve many operations and data elements together.
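
In C, for instance, a large record can be handed to a function by address instead of by copy; a minimal sketch (the struct and function names here are illustrative, not from any particular library):

#include <stdio.h>

/* A large record we would rather not copy around. */
struct record {
    char name[64];
    double values[1024];
};

/* Passing a pointer hands the function the record's address
   (a few bytes) instead of a copy of the whole structure. */
static void increment_first(struct record *r) {
    r->values[0] += 1.0;   /* a simple computation: incrementing one element */
}

int main(void) {
    struct record r = { "sample", { 41.0 } };
    increment_first(&r);                        /* pass the address, not the data */
    printf("%s: %.1f\n", r.name, r.values[0]);  /* prints "sample: 42.0" */
    return 0;
}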

Quality and reliability

Main articles: Software quality, Software testing, and Software reliability

Software quality is very important, especially for commercial and system software like Microsoft Office, Microsoft Windows and Linux. If software is faulty (buggy), it can delete a person's work, crash the computer and do other unexpected things. Faults and errors are called "bugs." Many bugs are discovered and eliminated (debugged) through software testing. However, software testing rarely – if ever – eliminates every bug; some programmers say that "every program has at least one more bug" (Lubarsky's Law). All major software companies, such as Microsoft, Novell and Sun Microsystems, have their own software testing departments with the specific goal of just testing. Software can be tested through unit testing, regression testing and other methods, which are done manually, or most commonly, automatically, since the amount of code to be tested can be quite large. For instance, NASA has extremely rigorous software testing procedures for many operating systems and communication functions: its systems interact and identify one another through command software, which allows the many people who work at NASA to check and evaluate the functioning of systems overall and makes it easier for hardware engineering and system operations to work together.

License

Main article: Software license

The software's license gives the user the right to use the software in the licensed environment. Some software comes with the license when purchased off the shelf, or with an OEM license when bundled with hardware. Other software comes with a free software license, granting the recipient the rights to modify and redistribute the software. Software can also be in the form of freeware or shareware.

Patents

Main articles: Software patent and Software patent debate

Software can be patented in some but not all countries; however, software patents can be controversial in the software industry, with many people holding different views about them. The controversy concerns the specific algorithms or techniques that the software contains, which a patent may prevent others from duplicating, treating them as intellectual property whose unlicensed use amounts to infringement.

Design and implementation

Main articles: Software development, Computer programming, and Software engineering

Design and implementation of software varies depending on the complexity of the software. For instance, the design and creation of Microsoft Word took much more time than designing and developing Microsoft Notepad, because of the difference in functionality between the two.

Software is usually designed and created (coded/written/programmed) in integrated development environments (IDEs) like Eclipse, Emacs and Microsoft Visual Studio that can simplify the process and compile the program. As noted in a different section, software is usually created on top of existing software and the application programming interface (API) that the underlying software provides, like GTK+, JavaBeans or Swing. Libraries (APIs) are categorized for different purposes. For instance, the JavaBeans library is used for designing enterprise applications, the Windows Forms library is used for designing graphical user interface (GUI) applications like Microsoft Word, and Windows Communication Foundation is used for designing web services. Underlying computer programming concepts like quicksort, hash tables, arrays, and binary trees can be useful in creating software. When a program is designed, it relies on the API. For instance, someone designing a Microsoft Windows desktop application might use the .NET Windows Forms library to design the desktop application and call its APIs like Form1.Close and Form1.Show[6] to close or open the application, writing only the additional operations the application needs. Without these APIs, the programmer would need to write all of that functionality from scratch. Companies like Sun Microsystems, Novell, and Microsoft provide their own APIs, so many applications are written using their software libraries, which usually have numerous APIs in them.
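
The same principle can be seen in plain C: rather than writing a sorting routine from scratch, a programmer calls the standard library's qsort and supplies only the piece the library cannot know, the comparison logic. A minimal sketch (cmp_int and the sample data are illustrative):

#include <stdio.h>
#include <stdlib.h>

/* Comparison callback required by the qsort API. */
static int cmp_int(const void *a, const void *b) {
    int x = *(const int *)a;
    int y = *(const int *)b;
    return (x > y) - (x < y);   /* avoids the overflow risk of x - y */
}

int main(void) {
    int data[] = { 42, 7, 19, 3, 88 };
    size_t n = sizeof data / sizeof data[0];

    /* The C standard library already provides the sorting algorithm;
       the programmer writes only the comparison function. */
    qsort(data, n, sizeof data[0], cmp_int);

    for (size_t i = 0; i < n; i++)
        printf("%d ", data[i]);
    printf("\n");
    return 0;
}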

Computer software has special economic characteristics that make its design, creation, and distribution different from most other economic goods.[specify][7][8]

A person who creates software is called a programmer, software engineer or software developer, terms that all have a similar meaning.

Industry and organizations

Main article: Software industry

A great variety of software companies and programmers in the world comprise the software industry. Software can be quite profitable: Bill Gates, the founder of Microsoft, was the richest person in the world in 2009, largely by selling the Microsoft Windows and Microsoft Office software products. The same goes for Larry Ellison, largely through his Oracle database software. Over time the software industry has become increasingly specialized.

Non-profit software organizations include the Free Software Foundation, GNU Project and Mozilla Foundation. Software standards organizations like the W3C and IETF develop software standards so that most software can interoperate through standards such as XML, HTML, HTTP and FTP.

Other well-known large software companies include Novell, SAP, Symantec, Adobe Systems, and Corel, while small companies often provide innovation.

See also

List of software

References

^ "Wordreference.com: WordNet 2.0". Princeton University, Princeton, NJ. Retrieved 2007-08-19. ^ "software..(n.d.).". Dictionary.com Unabridged (v 1.1). Retrieved 2007-04-13. ^ Hally, Mike (2005). Electronic brains/Stories from the dawn of the computer age. London: British Broadcasting Corporation and Granta Books. p. 79. ISBN 1-86207-663-4. ^ "Tying Arrangements and the Computer Industry: Digidyne Corp. vs. Data General". JSTOR 1372482. ^ "What is software? - Definition from Whatis.com". Searchsoa.techtarget.com. 2012-05-13. Retrieved 2012-05-18. ^ "MSDN Library". Retrieved 2010-06-14. ^ v. Engelhardt, Sebastian (2008). "The Economic Properties of Software". Jena Economic Research Papers 2 (2008–045.). ^ Kaminsky, Dan (1999). "Why Open Source Is The Optimum Economic Paradigm for Software".

External links

Wikimedia Commons has media related to: Software
Look up software in Wiktionary, the free dictionary.

Software Wikia
Software in Open Directory Project

"Computer program code" and "Software code" redirect here. For the source form, see source code. For the machine-executable code, see machine code.

A computer program, or just a program, is a sequence of instructions, written to perform a specified task with a computer.[1] A computer requires programs to function, typically executing the program's instructions in a central processor.[2] The program has an executable form that the computer can use directly to execute the instructions. The same program in its human-readable source code form, from which executable programs are derived (e.g., compiled), enables a programmer to study and develop its algorithms. A collection of computer programs and related data is referred to as software.

Computer source code is often written by computer programmers. Source code is written in a programming language that usually follows one of two main paradigms: imperative or declarative programming. Source code may be converted into an executable file (sometimes called an executable program or a binary) by a compiler and later executed by a central processing unit. Alternatively, computer programs may be executed with the aid of an interpreter, or may be embedded directly into hardware.

Computer programs may be categorized along functional lines: system software and application software. Two or more computer programs may run simultaneously on one computer, a process known as multitasking.


Programming

Main article: Computer programming

#include <stdio.h>

int main(void)
{
    printf("Hello world!\n");
    return 0;
}

Source code of a Hello World program written in the C programming language

Computer programming is the iterative process of writing or editing source code.

Editing source code involves testing, analyzing, and refining, and sometimes coordinating with other programmers on a jointly developed program.

A person who practices this skill is referred to as a computer programmer, software developer and sometimes coder.

The sometimes lengthy process of computer programming is usually referred to as software development.

The term software engineering is becoming popular as the process is seen as an engineering discipline.

Paradigms

Computer programs can be categorized by the programming language paradigm used to produce them. Two of the main paradigms are imperative and declarative.

Programs written using an imperative language specify an algorithm using declarations, expressions, and statements.[3] A declaration couples a variable name to a datatype. For example: var x: integer;. An expression yields a value. For example: 2 + 2 yields 4. Finally, a statement might assign an expression to a variable or use the value of a variable to alter the program's control flow. For example: x := 2 + 2; if x = 4 then do_something; One criticism of imperative languages is the side effect of an assignment statement on a class of variables called non-local variables.[4]
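
The examples above use a Pascal-like notation; the same three elements rendered in C might look as follows (a sketch, with do_something as a placeholder for whatever the branch should do):

#include <stdio.h>

static void do_something(void) { printf("x is 4\n"); }

int main(void) {
    int x;          /* declaration: couples the name x to the type int */
    x = 2 + 2;      /* the expression 2 + 2 yields 4; the statement assigns it */
    if (x == 4)     /* a statement using x's value to alter control flow */
        do_something();
    return 0;
}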

Programs written using a declarative language specify the properties that have to be met by the output. They do not specify details expressed in terms of the control flow of the executing machine but of the mathematical relations between the declared objects and their properties. Two broad categories of declarative languages are functional languages and logical languages. The principle behind functional languages (like Haskell) is to not allow side effects, which makes it easier to reason about programs like mathematical functions.[4] The principle behind logical languages (like Prolog) is to define the problem to be solved — the goal — and leave the detailed solution to the Prolog system itself.[5] The goal is defined by providing a list of subgoals. Then each subgoal is defined by further providing a list of its subgoals, etc. If a path of subgoals fails to find a solution, then that subgoal is backtracked and another path is systematically attempted.

The form in which a program is created may be textual or visual. In a visual language program, elements are graphically manipulated rather than textually specified. Compiling or interpreting

A computer program in the form of a human-readable, computer programming language is called source code. Source code may be converted into an executable image by a compiler or executed immediately with the aid of an interpreter.

Either compiled or interpreted programs might be executed in a batch process without human interaction, but interpreted programs allow a user to type commands in an interactive session. In this case each command is a separate program, and the commands are executed sequentially. When a language is used to give commands to a software application (such as a shell) it is called a scripting language.

Compilers are used to translate source code from a programming language into either object code or machine code. Object code needs further processing to become machine code, and machine code is the central processing unit's native code, ready for execution. Compiled computer programs are commonly referred to as executables, binary images, or simply as binaries — a reference to the binary file format used to store the executable code.

Interpreted computer programs — in a batch or interactive session — are either decoded and then immediately executed or are decoded into some efficient intermediate representation for future execution. BASIC, Perl, and Python are examples of immediately executed computer programs. Alternatively, Java computer programs are compiled ahead of time and stored as a machine independent code called bytecode. Bytecode is then executed on request by an interpreter called a virtual machine.

The main disadvantage of interpreters is that computer programs run slower than when compiled. Interpreting code is slower than running the compiled version because the interpreter must decode each statement each time it is loaded and then perform the desired action. However, software development may be faster using an interpreter because testing is immediate when the compiling step is omitted. Another disadvantage of interpreters is that at least one must be present on the computer during computer program execution. By contrast, compiled computer programs need no compiler present during execution.

No properties of a programming language require it to be exclusively compiled or exclusively interpreted. The categorization usually reflects the most popular method of language execution. For example, BASIC is thought of as an interpreted language and C a compiled language, despite the existence of BASIC compilers and C interpreters. Some systems use just-in-time compilation (JIT) whereby sections of the source are compiled 'on the fly' and stored for subsequent executions.

Self-modifying programs

A computer program in execution is normally treated as being different from the data the program operates on. However, in some cases this distinction is blurred when a computer program modifies itself. The modified computer program is subsequently executed as part of the same program. Self-modifying code is possible for programs written in machine code, assembly language, Lisp, C, COBOL, PL/I, Prolog and JavaScript (the eval feature), among others.

Execution and storage

Typically, computer programs are stored in non-volatile memory until requested either directly or indirectly to be executed by the computer user. Upon such a request, the program is loaded into random-access memory, by a computer program called an operating system, where it can be accessed directly by the central processor. The central processor then executes ("runs") the program, instruction by instruction, until termination. A program in execution is called a process.[6] Termination is either by normal self-termination or by error — software or hardware error.

Embedded programs

The microcontroller on the right of this USB flash drive is controlled with embedded firmware.

Some computer programs are embedded into hardware. A stored-program computer requires an initial computer program stored in its read-only memory to boot. The boot process is to identify and initialize all aspects of the system, from processor registers to device controllers to memory contents.[7] Following the initialization process, this initial computer program loads the operating system and sets the program counter to begin normal operations. Independent of the host computer, a hardware device might have embedded firmware to control its operation. Firmware is used when the computer program is rarely or never expected to change, or when the program must not be lost when the power is off.[8]

Manual programming

Switches for manual input on a Data General Nova 3

Computer programs historically were manually input to the central processor via switches. An instruction was represented by a configuration of on/off settings. After setting the configuration, an execute button was pressed. This process was then repeated. Computer programs also historically were manually input via paper tape or punched cards. After the medium was loaded, the starting address was set via switches and the execute button pressed.[9]

Automatic program generation

Generative programming is a style of computer programming that creates source code through generic classes, prototypes, templates, aspects, and code generators to improve programmer productivity. Source code is generated with programming tools such as a template processor or an integrated development environment. The simplest form of source code generator is a macro processor, such as the C preprocessor, which replaces patterns in source code according to relatively simple rules.
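
The C preprocessor mentioned above works exactly this way; a minimal sketch (the SQUARE macro is an illustrative example, not part of any library):

#include <stdio.h>

/* The preprocessor replaces each use of SQUARE(x) with the
   expanded text before the compiler proper ever runs. */
#define SQUARE(x) ((x) * (x))

int main(void) {
    /* After macro expansion this line reads:
       printf("%d\n", ((3 + 1) * (3 + 1)));  which prints 16 */
    printf("%d\n", SQUARE(3 + 1));
    return 0;
}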

Software engines output source code or markup code that simultaneously become the input to another computer process. Application servers are software engines that deliver applications to client computers. For example, a Wiki is an application server that lets users build dynamic content assembled from articles. Wikis generate HTML, CSS, Java, and JavaScript which are then interpreted by a web browser.

Simultaneous execution

See also: Process (computing) and Multiprocessing

Many operating systems support multitasking, which enables many computer programs to appear to run simultaneously on one computer. Operating systems may run multiple programs through process scheduling — a software mechanism that switches the CPU among processes frequently, so that users can interact with each program while it runs.[10] Within hardware, modern-day multiprocessor computers or computers with multicore processors may run multiple programs.[11]

One computer program can perform more than one computation simultaneously using threads or separate processes. Multithreading processors are optimized to execute multiple threads efficiently, as in the sketch below.
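
A minimal sketch of threads using the POSIX pthread library in C (the array, the two-thread split, and the function names are illustrative; a pthreads-enabled toolchain is assumed):

#include <pthread.h>
#include <stdio.h>

/* Each thread sums half of the array independently. */
static int data[8] = { 1, 2, 3, 4, 5, 6, 7, 8 };
static long sums[2];

static void *partial_sum(void *arg) {
    int half = *(int *)arg;              /* 0 = first half, 1 = second half */
    for (int i = half * 4; i < half * 4 + 4; i++)
        sums[half] += data[i];
    return NULL;
}

int main(void) {
    pthread_t t[2];
    int halves[2] = { 0, 1 };

    for (int i = 0; i < 2; i++)
        pthread_create(&t[i], NULL, partial_sum, &halves[i]);
    for (int i = 0; i < 2; i++)
        pthread_join(t[i], NULL);        /* wait for both threads to finish */

    printf("total = %ld\n", sums[0] + sums[1]);   /* prints "total = 36" */
    return 0;
}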

Functional categories

Computer programs may be categorized along functional lines. The main functional categories are system software and application software. System software includes the operating system, which couples computer hardware with application software.[12] The purpose of the operating system is to provide an environment in which application software executes in a convenient and efficient manner.[12] In addition to the operating system, system software includes utility programs that help manage and tune the computer. If a computer program is not system software then it is application software. Application software includes middleware, which couples the system software with the user interface. Application software also includes utility programs that help users solve application problems, like the need for sorting.

Sometimes development environments for software development are seen as a functional category of their own, especially in the context of human-computer interaction and programming language design. Development environments gather system software (such as compilers and the system's batch-processing scripting languages) and application software (such as IDEs) for the specific purpose of helping programmers create new programs.

See also

Algorithm, for the relationship between computer programs and algorithms
Computer software, for more information on computer programs
Data structure

References

^ Stair, Ralph M., et al. (2003). Principles of Information Systems, Sixth Edition. Thomson Learning, Inc. p. 132. ISBN 0-619-06489-7.
^ Silberschatz, Abraham (1994). Operating System Concepts, Fourth Edition. Addison-Wesley. p. 58. ISBN 0-201-50480-4.
^ Wilson, Leslie B. (1993). Comparative Programming Languages, Second Edition. Addison-Wesley. p. 75. ISBN 0-201-56885-3.
^ a b Wilson, Leslie B. (1993). Comparative Programming Languages, Second Edition. Addison-Wesley. p. 213. ISBN 0-201-56885-3.
^ Wilson, Leslie B. (1993). Comparative Programming Languages, Second Edition. Addison-Wesley. p. 244. ISBN 0-201-56885-3.
^ Silberschatz, Abraham (1994). Operating System Concepts, Fourth Edition. Addison-Wesley. p. 97. ISBN 0-201-50480-4.
^ Silberschatz, Abraham (1994). Operating System Concepts, Fourth Edition. Addison-Wesley. p. 30. ISBN 0-201-50480-4.
^ Tanenbaum, Andrew S. (1990). Structured Computer Organization, Third Edition. Prentice Hall. p. 11. ISBN 0-13-854662-2.
^ Silberschatz, Abraham (1994). Operating System Concepts, Fourth Edition. Addison-Wesley. p. 6. ISBN 0-201-50480-4.
^ Silberschatz, Abraham (1994). Operating System Concepts, Fourth Edition. Addison-Wesley. p. 100. ISBN 0-201-50480-4.
^ Akhter, Shameem (2006). Multi-Core Programming. Richard Bowles (Intel Press). pp. 11–13. ISBN 0-9764832-4-6.
^ a b Silberschatz, Abraham (1994). Operating System Concepts, Fourth Edition. Addison-Wesley. p. 1. ISBN 0-201-50480-4.

Further reading

Knuth, Donald E. (1997). The Art of Computer Programming, Volume 1, 3rd Edition. Boston: Addison-Wesley. ISBN 0-201-89683-4.
Knuth, Donald E. (1997). The Art of Computer Programming, Volume 2, 3rd Edition. Boston: Addison-Wesley. ISBN 0-201-89684-2.
Knuth, Donald E. (1997). The Art of Computer Programming, Volume 3, 3rd Edition. Boston: Addison-Wesley. ISBN 0-201-89685-0.

External links

Definition of "Program" at Webopedia Definition of "Software"[dead link] at FOLDOC Definition of "Computer Program" at dictionary.com


History of computing

The Jacquard loom, on display at the Museum of Science and Industry in Manchester, England, was one of the first programmable devices.

Main article: History of computing hardware

The first use of the word “computer” was recorded in 1613 in a book called The Yong Mans Gleanings by English writer Richard Braithwait: “I haue read the truest computer of Times, and the best Arithmetician that euer breathed, and he reduceth thy dayes into a short number.” It referred to a person who carried out calculations, or computations, and the word continued with the same meaning until the middle of the 20th century. From the end of the 19th century the word began to take on its more familiar meaning, a machine that carries out computations.[3]

Limited-function early computers

The history of the modern computer begins with two separate technologies, automated calculation and programmability, but no single device can be identified as the earliest computer, partly because of the inconsistent application of that term. A few devices are worth mentioning, though. Some mechanical aids to computing were very successful and survived for centuries until the advent of the electronic calculator: the Sumerian abacus, designed around 2500 BC,[4] a descendant of which won a speed competition against a modern desk calculating machine in Japan in 1946;[5] the slide rule, invented in the 1620s, which was carried on five Apollo space missions, including to the moon;[6] and arguably the astrolabe and the Antikythera mechanism, an ancient astronomical computer built by the Greeks around 80 BC.[7] The Greek mathematician Hero of Alexandria (c. 10–70 AD) built a mechanical theater which performed a play lasting 10 minutes, operated by a complex system of ropes and drums that might be considered a means of deciding which parts of the mechanism performed which actions and when.[8] This is the essence of programmability.

Around the end of the 10th century, the French monk Gerbert d'Aurillac brought back from Spain the drawings of a machine invented by the Moors that answered either Yes or No to the questions it was asked.[9] Again in the 13th century, the monks Albertus Magnus and Roger Bacon built talking androids, though the work was not developed further (Albertus Magnus complained that he had wasted forty years of his life when Thomas Aquinas, terrified by his machine, destroyed it).[10]

In 1642, the Renaissance saw the invention of the mechanical calculator,[11] a device that could perform all four arithmetic operations without relying on human intelligence.[12] The mechanical calculator was at the root of the development of computers in two separate ways. Initially, it was in trying to develop more powerful and more flexible calculators[13] that the computer was first theorized by Charles Babbage[14][15] and then developed.[16] Secondly, development of a low-cost electronic calculator, successor to the mechanical calculator, resulted in the development by Intel[17] of the first commercially available microprocessor integrated circuit.

First general-purpose computers

In 1801, Joseph Marie Jacquard made an improvement to the textile loom by introducing a series of punched paper cards as a template, which allowed his loom to weave intricate patterns automatically. The resulting Jacquard loom was an important step in the development of computers because the use of punched cards to define woven patterns can be viewed as an early, albeit limited, form of programmability.

"The Most Famous Image in the Early History of Computing"[18]

This portrait of Jacquard was woven in silk on a Jacquard loom and required 24,000 punched cards to create (1839). It was only produced to order. Charles Babbage owned one of these portraits; it inspired him in using perforated cards in his analytical engine.[19]

The Zuse Z3, 1941, considered the world's first working programmable, fully automatic computing machine.

It was the fusion of automatic calculation with programmability that produced the first recognizable computers. In 1837, Charles Babbage was the first to conceptualize and design a fully programmable mechanical computer, his analytical engine.[20] Limited finances and Babbage's inability to resist tinkering with the design meant that the device was never completed—nevertheless his son, Henry Babbage, completed a simplified version of the analytical engine's computing unit (the mill) in 1888. He gave a successful demonstration of its use in computing tables in 1906. This machine was given to the Science Museum in South Kensington in 1910.

Ada Lovelace, considered to be the first computer programmer.[21]

Between 1842 and 1843, Ada Lovelace, an analyst of Charles Babbage's analytical engine, translated an article by Italian military engineer Luigi Menabrea on the engine, which she supplemented with an elaborate set of notes of her own, simply called Notes. These notes contain what is considered the first computer program – that is, an algorithm encoded for processing by a machine. Lovelace's notes are important in the early history of computers. She also developed a vision of the capability of computers to go beyond mere calculating or number-crunching, while others, including Babbage himself, focused only on those capabilities.[22]

In the late 1880s, Herman Hollerith invented the recording of data on a machine-readable medium. Earlier uses of machine-readable media had been for control, not data. “After some initial trials with paper tape, he settled on punched cards...”[23] To process these punched cards he invented the tabulator and the keypunch machines. These three inventions were the foundation of the modern information processing industry. Large-scale automated data processing of punched cards was performed for the 1890 United States Census by Hollerith's company, which later became the core of IBM. By the end of the 19th century a number of ideas and technologies that would later prove useful in the realization of practical computers had begun to appear: Boolean algebra, the vacuum tube (thermionic valve), punched cards and tape, and the teleprinter.

During the first half of the 20th century, many scientific computing needs were met by increasingly sophisticated analog computers, which used a direct mechanical or electrical model of the problem as a basis for computation. However, these were not programmable and generally lacked the versatility and accuracy of modern digital computers.

Alan Turing is widely regarded as the father of modern computer science. In 1936, Turing provided an influential formalization of the concept of the algorithm and computation with the Turing machine, providing a blueprint for the electronic digital computer.[24] Of his role in the creation of the modern computer, Time magazine, in naming Turing one of the 100 most influential people of the 20th century, states: “The fact remains that everyone who taps at a keyboard, opening a spreadsheet or a word-processing program, is working on an incarnation of a Turing machine.”[24]

The ENIAC, which became operational in 1946, is considered to be the first general-purpose electronic computer. Programmers Betty Jean Jennings (left) and Fran Bilas (right) are depicted here operating the ENIAC's main control panel.

EDSAC was one of the first computers to implement the stored-program (von Neumann) architecture.

The Atanasoff–Berry Computer (ABC) was the world's first electronic digital computer, albeit not programmable.[25] Atanasoff is considered to be one of the fathers of the computer.[26] Conceived in 1937 by Iowa State College physics professor John Atanasoff, and built with the assistance of graduate student Clifford Berry,[27] the machine was not programmable, being designed only to solve systems of linear equations. The computer did employ parallel computation. A 1973 court ruling in a patent dispute found that the patent for the 1946 ENIAC computer derived from the Atanasoff–Berry Computer.

The first program-controlled computer was invented by Konrad Zuse, who built the Z3, an electromechanical computing machine, in 1941.[28] The first programmable electronic computer was the Colossus, built in 1943 by Tommy Flowers.

George Stibitz is internationally recognized as a father of the modern digital computer. While working at Bell Labs in November 1937, Stibitz invented and built a relay-based calculator he dubbed the “Model K” (for “kitchen table,” on which he had assembled it), which was the first to use binary circuits to perform an arithmetic operation. Later models added greater sophistication including complex arithmetic and programmability.[29]

A succession of steadily more powerful and flexible computing devices was constructed in the 1930s and 1940s, gradually adding the key features that are seen in modern computers. The use of digital electronics (largely invented by Claude Shannon in 1937) and more flexible programmability were vitally important steps, but defining one point along this road as “the first digital electronic computer” is difficult (Shannon 1940). Notable achievements include:

Konrad Zuse's electromechanical “Z machines.” The Z3 (1941) was the first working machine featuring binary arithmetic, including floating point arithmetic and a measure of programmability. In 1998 the Z3 was proved to be Turing complete, therefore being the world's first operational computer.[30]

The non-programmable Atanasoff–Berry Computer (commenced in 1937, completed in 1941), which used vacuum tube based computation, binary numbers, and regenerative capacitor memory. The use of regenerative memory allowed it to be much more compact than its peers (being approximately the size of a large desk or workbench), since intermediate results could be stored and then fed back into the same set of computation elements.

The secret British Colossus computers (1943),[31] which had limited programmability but demonstrated that a device using thousands of tubes could be reasonably reliable and electronically re-programmable. It was used for breaking German wartime codes.

The Harvard Mark I (1944), a large-scale electromechanical computer with limited programmability.[32]

The U.S. Army's Ballistic Research Laboratory ENIAC (1946), which used decimal arithmetic and is sometimes called the first general-purpose electronic computer (since Konrad Zuse's Z3 of 1941 used electromagnets instead of electronics). Initially, however, ENIAC had an architecture which required rewiring a plugboard to change its programming.

Stored-program architecture

Several developers of ENIAC, recognizing its flaws, came up with a far more flexible and elegant design, which came to be known as the “stored-program architecture” or von Neumann architecture. This design was first formally described by John von Neumann in the paper First Draft of a Report on the EDVAC, distributed in 1945. A number of projects to develop computers based on the stored-program architecture commenced around this time, the first of which was completed in 1948 at the University of Manchester in England, the Manchester Small-Scale Experimental Machine (SSEM or “Baby”). The Electronic Delay Storage Automatic Calculator (EDSAC), completed a year after the SSEM at Cambridge University, was the first practical, non-experimental implementation of the stored-program design and was put to use immediately for research work at the university. Shortly thereafter, the machine originally described by von Neumann's paper—EDVAC—was completed but did not see full-time use for an additional two years.

Nearly all modern computers implement some form of the stored-program architecture, making it the single trait by which the word “computer” is now defined. While the technologies used in computers have changed dramatically since the first electronic, general-purpose computers of the 1940s, most still use the von Neumann architecture.

Die of an Intel 80486DX2 microprocessor (actual size: 12×6.75 mm) in its packaging

Beginning in the 1950s, Soviet scientists Sergei Sobolev and Nikolay Brusentsov conducted research on ternary computers, devices that operated on a base-three numbering system of -1, 0, and 1 rather than the conventional binary numbering system upon which most computers are based. They designed the Setun, a functional ternary computer, at Moscow State University. The device was put into limited production in the Soviet Union, but it was supplanted by the more common binary architecture.

Semiconductors and microprocessors

Computers using vacuum tubes as their electronic elements were in use throughout the 1950s, but by the 1960s they had been largely replaced by transistor-based machines, which were smaller, faster, cheaper to produce, required less power, and were more reliable. The first transistorized computer was demonstrated at the University of Manchester in 1953.[33] In the 1970s, integrated circuit technology and the subsequent creation of microprocessors, such as the Intel 4004, further decreased size and cost and further increased speed and reliability of computers. By the late 1970s, many products such as video recorders contained dedicated computers called microcontrollers, and they started to appear as a replacement for mechanical controls in domestic appliances such as washing machines. The 1980s witnessed home computers and the now ubiquitous personal computer. With the evolution of the Internet, personal computers are becoming as common as the television and the telephone in the household.[citation needed]

Modern smartphones are fully programmable computers in their own right, and as of 2009 may well be the most common form of such computers in existence.[citation needed]

Programs

Alan Turing was an influential computer scientist.

The defining feature of modern computers which distinguishes them from all other machines is that they can be programmed. That is to say that some type of instructions (the program) can be given to the computer, and it will process them. Modern computers based on the von Neumann architecture often have machine code in the form of an imperative programming language.

In practical terms, a computer program may be just a few instructions or extend to many millions of instructions, as do the programs for word processors and web browsers, for example. A typical modern computer can execute billions of instructions per second and rarely makes a mistake over many years of operation. Large computer programs consisting of several million instructions may take teams of programmers years to write, and due to the complexity of the task almost certainly contain errors.

Stored program architecture

Main articles: Computer program and Computer programming

Replica of the Small-Scale Experimental Machine (SSEM), the world's first stored-program computer, at the Museum of Science and Industry in Manchester, England

This section applies to most common RAM machine-based computers.

In most cases, computer instructions are simple: add one number to another, move some data from one location to another, send a message to some external device, etc. These instructions are read from the computer's memory and are generally carried out (executed) in the order they were given. However, there are usually specialized instructions to tell the computer to jump ahead or backwards to some other place in the program and to carry on executing from there. These are called “jump” instructions (or branches). Furthermore, jump instructions may be made to happen conditionally so that different sequences of instructions may be used depending on the result of some previous calculation or some external event. Many computers directly support subroutines by providing a type of jump that “remembers” the location it jumped from and another instruction to return to the instruction following that jump instruction.
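
In C, these ideas surface as goto (an unconditional jump), if (a conditional branch), and function calls, which “remember” where to resume; a toy sketch (the label and function names are illustrative):

#include <stdio.h>

/* A subroutine: the call "remembers" where it came from,
   and return jumps back to the instruction after the call. */
static void report(int n) {
    printf("count = %d\n", n);
}

int main(void) {
    int count = 0;

top:                     /* a label: a place the program can jump to */
    count++;
    report(count);       /* call the subroutine, then resume here */
    if (count < 3)       /* conditional branch */
        goto top;        /* jump back to the label */
    return 0;
}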

Program execution might be likened to reading a book. While a person will normally read each word and line in sequence, they may at times jump back to an earlier place in the text or skip sections that are not of interest. Similarly, a computer may sometimes go back and repeat the instructions in some section of the program over and over again until some internal condition is met. This is called the flow of control within the program and it is what allows the computer to perform tasks repeatedly without human intervention.

Comparatively, a person using a pocket calculator can perform a basic arithmetic operation such as adding two numbers with just a few button presses. But to add together all of the numbers from 1 to 1,000 would take thousands of button presses and a lot of time, with a near certainty of making a mistake. On the other hand, a computer may be programmed to do this with just a few simple instructions. For example:

      mov #0, sum     ; set sum to 0
      mov #1, num     ; set num to 1
loop: add num, sum    ; add num to sum
      add #1, num     ; add 1 to num
      cmp num, #1000  ; compare num to 1000
      ble loop        ; if num <= 1000, go back to 'loop'
      halt            ; end of program. stop running

Once told to run this program, the computer will perform the repetitive addition task without further human intervention. It will almost never make a mistake, and a modern PC can complete the task in about a millionth of a second.[34]
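
For comparison, a sketch of the same computation in a high-level language (here C):

#include <stdio.h>

int main(void) {
    int sum = 0;
    /* Add together all of the numbers from 1 to 1000. */
    for (int num = 1; num <= 1000; num++)
        sum += num;
    printf("%d\n", sum);   /* prints 500500 */
    return 0;
}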

Bugs

Main article: Software bug

The actual first computer bug, a moth found trapped on a relay of the Harvard Mark II computer

Errors in computer programs are called “bugs.” They may be benign and not affect the usefulness of the program, or have only subtle effects. But in some cases, they may cause the program or the entire system to “hang,” becoming unresponsive to input such as mouse clicks or keystrokes, to completely fail, or to crash. Otherwise benign bugs may sometimes be harnessed for malicious intent by an unscrupulous user writing an exploit, code designed to take advantage of a bug and disrupt a computer's proper execution. Bugs are usually not the fault of the computer. Since computers merely execute the instructions they are given, bugs are nearly always the result of programmer error or an oversight made in the program's design.[35]

Admiral Grace Hopper, an American computer scientist and developer of the first compiler, is credited with the first use of the term “bugs” in computing, after a dead moth was found shorting a relay in the Harvard Mark II computer in September 1947.[36]

Machine code

In most computers, individual instructions are stored as machine code with each instruction being given a unique number (its operation code or opcode for short). The command to add two numbers together would have one opcode, the command to multiply them would have a different opcode and so on. The simplest computers are able to perform any of a handful of different instructions; the more complex computers have several hundred to choose from, each with a unique numerical code. Since the computer's memory is able to store numbers, it can also store the instruction codes. This leads to the important fact that entire programs (which are just lists of these instructions) can be represented as lists of numbers and can themselves be manipulated inside the computer in the same way as numeric data. The fundamental concept of storing programs in the computer's memory alongside the data they operate on is the crux of the von Neumann, or stored program, architecture. In some cases, a computer might store some or all of its program in memory that is kept separate from the data it operates on. This is called the Harvard architecture after the Harvard Mark I computer. Modern von Neumann computers display some traits of the Harvard architecture in their designs, such as in CPU caches.
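As a rough illustration, using a made-up encoding rather than any real machine's, an instruction and its operand can be packed into a single number and unpacked again; this is all that is needed for programs to share memory with data:

# Hypothetical toy encoding: high byte holds the opcode, low byte the operand.
def encode(opcode, operand):
    return opcode * 256 + operand

def decode(word):
    return word // 256, word % 256

word = encode(2, 100)   # say opcode 2 means "ADD"; this is "ADD 100"
print(word)             # 612 -- the instruction is just a number
print(decode(word))     # (2, 100) -- and can be taken apart again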

[Image: A 1970s punched card containing one line from a FORTRAN program. The card reads: “Z(1) = Y + W(1)” and is labeled “PROJ039” for identification purposes.]

While it is possible to write computer programs as long lists of numbers (machine language), and while this technique was used with many early computers,[37] it is extremely tedious and potentially error-prone to do so in practice, especially for complicated programs. Instead, each basic instruction can be given a short name that is indicative of its function and easy to remember – a mnemonic such as ADD, SUB, MULT or JUMP. These mnemonics are collectively known as a computer's assembly language. Converting programs written in assembly language into something the computer can actually understand (machine language) is usually done by a computer program called an assembler.
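A minimal sketch of what an assembler does; the mnemonics follow real conventions, but the opcode numbers here are invented for illustration:

# Hypothetical opcode numbers for a toy machine.
OPCODES = {"ADD": 1, "SUB": 2, "MULT": 3, "JUMP": 4}

def assemble(source):
    """Translate lines like 'ADD 5' into numeric (opcode, operand) pairs."""
    machine_code = []
    for line in source.splitlines():
        mnemonic, operand = line.split()
        machine_code.append((OPCODES[mnemonic], int(operand)))
    return machine_code

print(assemble("ADD 5\nJUMP 0"))   # [(1, 5), (4, 0)]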

Programming language
Main article: Programming language

Programming languages provide various ways of specifying programs for computers to run. Unlike natural languages, programming languages are designed to permit no ambiguity and to be concise. They are purely written languages and are often difficult to read aloud. They are generally either translated into machine code by a compiler or an assembler before being run, or translated directly at run time by an interpreter. Sometimes programs are executed by a hybrid of the two techniques.

Low-level languages
Main article: Low-level programming language

Machine languages and the assembly languages that represent them (collectively termed low-level programming languages) tend to be unique to a particular type of computer. For instance, an ARM architecture computer (such as may be found in a PDA or a hand-held videogame) cannot understand the machine language of an Intel Pentium or an AMD Athlon 64 computer that might be in a PC.[38]

Higher-level languages
Main article: High-level programming language

Though considerably easier than in machine language, writing long programs in assembly language is often difficult and is also error prone. Therefore, most practical programs are written in more abstract high-level programming languages that are able to express the needs of the programmer more conveniently (and thereby help reduce programmer error). High level languages are usually “compiled” into machine language (or sometimes into assembly language and then into machine language) using another computer program called a compiler.[39] High level languages are less related to the workings of the target computer than assembly language, and more related to the language and structure of the problem(s) to be solved by the final program. It is therefore often possible to use different compilers to translate the same high level language program into the machine language of many different types of computer. This is part of the means by which software like video games may be made available for different computer architectures such as personal computers and various video game consoles.

Program design

Program design of small programs is relatively simple and involves the analysis of the problem, collection of inputs, using the programming constructs within languages, devising or using established procedures and algorithms, providing data for output devices and solutions to the problem as applicable. As problems become larger and more complex, features such as subprograms, modules, formal documentation, and new paradigms such as object-oriented programming are encountered. Large programs involving thousands of lines of code and more require formal software methodologies. The task of developing large software systems presents a significant intellectual challenge. Producing software with an acceptably high reliability within a predictable schedule and budget has historically been difficult; the academic and professional discipline of software engineering concentrates specifically on this challenge.

Components
Main articles: Central processing unit and Microprocessor

A general purpose computer has four main components: the arithmetic logic unit (ALU), the control unit, the memory, and the input and output devices (collectively termed I/O). These parts are interconnected by buses, often made of groups of wires.

Inside each of these parts are thousands to trillions of small electrical circuits which can be turned off or on by means of an electronic switch. Each circuit represents a bit (binary digit) of information so that when the circuit is on it represents a “1”, and when off it represents a “0” (in positive logic representation). The circuits are arranged in logic gates so that one or more of the circuits may control the state of one or more of the other circuits.

The control unit, ALU, registers, and basic I/O (and often other hardware closely linked with these) are collectively known as a central processing unit (CPU). Early CPUs were composed of many separate components, but since the mid-1970s CPUs have typically been constructed on a single integrated circuit called a microprocessor.

Control unit
Main articles: CPU design and Control unit
[Image: Diagram showing how a particular MIPS architecture instruction would be decoded by the control system]

The control unit (often called a control system or central controller) manages the computer's various components; it reads and interprets (decodes) the program instructions, transforming them into a series of control signals which activate other parts of the computer.[40] Control systems in advanced computers may change the order of some instructions so as to improve performance.

A key component common to all CPUs is the program counter, a special memory cell (a register) that keeps track of which location in memory the next instruction is to be read from.[41]

The control system's function is as follows; note that this is a simplified description, and some of these steps may be performed concurrently or in a different order depending on the type of CPU (an illustrative sketch of the cycle follows the list):

1. Read the code for the next instruction from the cell indicated by the program counter.
2. Decode the numerical code for the instruction into a set of commands or signals for each of the other systems.
3. Increment the program counter so it points to the next instruction.
4. Read whatever data the instruction requires from cells in memory (or perhaps from an input device). The location of this required data is typically stored within the instruction code.
5. Provide the necessary data to an ALU or register.
6. If the instruction requires an ALU or specialized hardware to complete, instruct the hardware to perform the requested operation.
7. Write the result from the ALU back to a memory location or to a register or perhaps an output device.
8. Jump back to step (1).
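As an illustrative sketch only (the instruction set below is invented for the example, not taken from any real CPU), the cycle can be written out as a short Python program in which the program counter is an ordinary variable and a jump simply overwrites it:

# Toy instruction set: ("SET", reg, value), ("ADD", reg, value),
# ("JUMP_IF_LESS", reg, limit, target), ("HALT",)
def run(program):
    pc = 0                      # program counter
    regs = {"n": 0}             # a single register
    while True:
        instr = program[pc]     # 1. fetch the instruction at the program counter
        op = instr[0]           # 2. decode it
        pc += 1                 # 3. increment the program counter
        if op == "SET":
            regs[instr[1]] = instr[2]
        elif op == "ADD":
            regs[instr[1]] += instr[2]
        elif op == "JUMP_IF_LESS":
            if regs[instr[1]] < instr[2]:
                pc = instr[3]   # a jump just overwrites the program counter
        elif op == "HALT":
            return regs

# Add 1 ten times: loops back to instruction 1 while n < 10.
print(run([("SET", "n", 0), ("ADD", "n", 1),
           ("JUMP_IF_LESS", "n", 10, 1), ("HALT",)]))
# {'n': 10}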

Since the program counter is (conceptually) just another set of memory cells, it can be changed by calculations done in the ALU. Adding 100 to the program counter would cause the next instruction to be read from a place 100 locations further down the program. Instructions that modify the program counter are often known as “jumps” and allow for loops (instructions that are repeated by the computer) and often conditional instruction execution (both examples of control flow).

The sequence of operations that the control unit goes through to process an instruction is in itself like a short computer program, and indeed, in some more complex CPU designs, there is another yet smaller computer called a microsequencer, which runs a microcode program that causes all of these events to happen.

Arithmetic logic unit (ALU)
Main article: Arithmetic logic unit

The ALU is capable of performing two classes of operations: arithmetic and logic.[42]

The set of arithmetic operations that a particular ALU supports may be limited to addition and subtraction, or might include multiplication, division, trigonometry functions such as sine, cosine, etc., and square roots. Some can only operate on whole numbers (integers) whilst others use floating point to represent real numbers, albeit with limited precision. However, any computer that is capable of performing just the simplest operations can be programmed to break down the more complex operations into simple steps that it can perform. Therefore, any computer can be programmed to perform any arithmetic operation—although it will take more time to do so if its ALU does not directly support the operation. An ALU may also compare numbers and return boolean truth values (true or false) depending on whether one is equal to, greater than or less than the other (“is 64 greater than 65?”).
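For instance, a machine whose ALU lacks a multiply instruction can still multiply by repeated addition. An illustrative Python sketch of the idea:

def multiply(a, b):
    """Multiply two non-negative integers using only addition."""
    result = 0
    for _ in range(b):   # add 'a' to the result 'b' times
        result += a
    return result

print(multiply(6, 7))    # 42 -- slower than a hardware multiply, same answer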

Logic operations involve Boolean logic: AND, OR, XOR and NOT. These can be useful for creating complicated conditional statements and processing boolean logic.

Superscalar computers may contain multiple ALUs, allowing them to process several instructions simultaneously.[43] Graphics processors and computers with SIMD and MIMD features often contain ALUs that can perform arithmetic on vectors and matrices.

Memory
Main article: Computer data storage
[Image: Magnetic core memory was the computer memory of choice throughout the 1960s, until it was replaced by semiconductor memory]

A computer's memory can be viewed as a list of cells into which numbers can be placed or read. Each cell has a numbered “address” and can store a single number. The computer can be instructed to “put the number 123 into the cell numbered 1357” or to “add the number that is in cell 1357 to the number that is in cell 2468 and put the answer into cell 1595.” The information stored in memory may represent practically anything. Letters, numbers, even computer instructions can be placed into memory with equal ease. Since the CPU does not differentiate between different types of information, it is the software's responsibility to give significance to what the memory sees as nothing but a series of numbers.

In almost all modern computers, each memory cell is set up to store binary numbers in groups of eight bits (called a byte). Each byte is able to represent 256 different numbers (2^8 = 256); either from 0 to 255 or −128 to +127. To store larger numbers, several consecutive bytes may be used (typically, two, four or eight). When negative numbers are required, they are usually stored in two's complement notation. Other arrangements are possible, but are usually not seen outside of specialized applications or historical contexts. A computer can store any kind of information in memory if it can be represented numerically. Modern computers have billions or even trillions of bytes of memory.
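Python's built-in int.to_bytes and int.from_bytes methods can illustrate these conventions: a single byte covers 0 to 255, larger values need more bytes, and the same bit pattern means different numbers under signed (two's complement) and unsigned interpretations:

# One byte holds 0..255; larger or signed values need more bytes or a convention.
print((255).to_bytes(1, "big"))                      # b'\xff'
print((1000).to_bytes(2, "big"))                     # b'\x03\xe8' -- two bytes
print((-1).to_bytes(1, "big", signed=True))          # b'\xff' -- two's complement
print(int.from_bytes(b"\xff", "big", signed=True))   # -1
print(int.from_bytes(b"\xff", "big", signed=False))  # 255 -- same bits, different meaning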

The CPU contains a special set of memory cells called registers that can be read and written to much more rapidly than the main memory area. There are typically between two and one hundred registers depending on the type of CPU. Registers are used for the most frequently needed data items to avoid having to access main memory every time data is needed. As data is constantly being worked on, reducing the need to access main memory (which is often slow compared to the ALU and control units) greatly increases the computer's speed.

Computer main memory comes in two principal varieties: random-access memory or RAM and read-only memory or ROM. RAM can be read and written to anytime the CPU commands it, but ROM is preloaded with data and software that never changes, therefore the CPU can only read from it. ROM is typically used to store the computer's initial start-up instructions. In general, the contents of RAM are erased when the power to the computer is turned off, but ROM retains its data indefinitely. In a PC, the ROM contains a specialized program called the BIOS that orchestrates loading the computer's operating system from the hard disk drive into RAM whenever the computer is turned on or reset. In embedded computers, which frequently do not have disk drives, all of the required software may be stored in ROM. Software stored in ROM is often called firmware, because it is notionally more like hardware than software. Flash memory blurs the distinction between ROM and RAM, as it retains its data when turned off but is also rewritable. It is typically much slower than conventional ROM and RAM however, so its use is restricted to applications where high speed is unnecessary.[44]

In more sophisticated computers there may be one or more RAM cache memories, which are slower than registers but faster than main memory. Generally computers with this sort of cache are designed to move frequently needed data into the cache automatically, often without the need for any intervention on the programmer's part.

Input/output (I/O)
Main article: Input/output
[Image: Hard disk drives are common storage devices used with computers]

I/O is the means by which a computer exchanges information with the outside world.[45] Devices that provide input or output to the computer are called peripherals.[46] On a typical personal computer, peripherals include input devices like the keyboard and mouse, and output devices such as the display and printer. Hard disk drives, floppy disk drives and optical disc drives serve as both input and output devices. Computer networking is another form of I/O.

I/O devices are often complex computers in their own right, with their own CPU and memory. A graphics processing unit might contain fifty or more tiny computers that perform the calculations necessary to display 3D graphics.[citation needed] Modern desktop computers contain many smaller computers that assist the main CPU in performing I/O.

Multitasking
Main article: Computer multitasking

While a computer may be viewed as running one gigantic program stored in its main memory, in some systems it is necessary to give the appearance of running several programs simultaneously. This is achieved by multitasking, i.e. having the computer switch rapidly between running each program in turn.[47]

One means by which this is done is with a special signal called an interrupt, which can periodically cause the computer to stop executing instructions where it was and do something else instead. By remembering where it was executing prior to the interrupt, the computer can return to that task later. If several programs are running “at the same time,” then the interrupt generator might be causing several hundred interrupts per second, causing a program switch each time. Since modern computers typically execute instructions several orders of magnitude faster than human perception, it may appear that many programs are running at the same time even though only one is ever executing in any given instant. This method of multitasking is sometimes termed “time-sharing” since each program is allocated a “slice” of time in turn.[48]
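The switching itself can be mimicked in a few lines; this Python sketch replaces the interrupt-driven switching of a real operating system with a simple round-robin loop over generator "programs":

def task(name, steps):
    for i in range(steps):
        yield f"{name} step {i}"   # yield = "my time slice is over"

# The "scheduler" gives each program one slice in turn.
running = [task("A", 3), task("B", 2)]
while running:
    current = running.pop(0)
    try:
        print(next(current))       # run one time slice
        running.append(current)    # put it back at the end of the queue
    except StopIteration:
        pass                       # that program has finished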

Before the era of cheap computers, the principal use for multitasking was to allow many people to share the same computer.

Seemingly, multitasking would cause a computer that is switching between several programs to run more slowly, in direct proportion to the number of programs it is running, but most programs spend much of their time waiting for slow input/output devices to complete their tasks. If a program is waiting for the user to click on the mouse or press a key on the keyboard, then it will not take a “time slice” until the event it is waiting for has occurred. This frees up time for other programs to execute, so that many programs may be run simultaneously without unacceptable speed loss.

Multiprocessing
Main article: Multiprocessing
[Image: Cray designed many supercomputers that used multiprocessing heavily]

Some computers are designed to distribute their work across several CPUs in a multiprocessing configuration, a technique once employed only in large and powerful machines such as supercomputers, mainframe computers and servers. Multiprocessor and multi-core (multiple CPUs on a single integrated circuit) personal and laptop computers are now widely available, and are being increasingly used in lower-end markets as a result.

Supercomputers in particular often have highly distinctive architectures that differ significantly from the basic stored-program architecture and from general purpose computers.[49] They often feature thousands of CPUs, customized high-speed interconnects, and specialized computing hardware. Such designs tend to be useful only for specialized tasks due to the large scale of program organization required to successfully utilize most of the available resources at once. Supercomputers usually see usage in large-scale simulation, graphics rendering, and cryptography applications, as well as with other so-called “embarrassingly parallel” tasks.

Networking and the Internet
Main articles: Computer networking and Internet
[Image: Visualization of a portion of the routes on the Internet]

Computers have been used to coordinate information between multiple locations since the 1950s. The U.S. military's SAGE system was the first large-scale example of such a system, which led to a number of special-purpose commercial systems such as Sabre.[50]

In the 1970s, computer engineers at research institutions throughout the United States began to link their computers together using telecommunications technology. The effort was funded by ARPA (now DARPA), and the computer network that resulted was called the ARPANET.[51] The technologies that made the ARPANET possible spread and evolved.

In time, the network spread beyond academic and military institutions and became known as the Internet. The emergence of networking involved a redefinition of the nature and boundaries of the computer. Computer operating systems and applications were modified to include the ability to define and access the resources of other computers on the network, such as peripheral devices and stored information, as extensions of the resources of an individual computer. Initially these facilities were available primarily to people working in high-tech environments, but in the 1990s the spread of applications like e-mail and the World Wide Web, combined with the development of cheap, fast networking technologies like Ethernet and ADSL, made computer networking almost ubiquitous. The number of networked computers continues to grow rapidly: a very large proportion of personal computers regularly connect to the Internet to communicate and receive information, and “wireless” networking, often utilizing mobile phone networks, extends this connectivity to mobile computing environments.

Computer architecture paradigms

There are many types of computer architectures:

Quantum computer vs Chemical computer
Scalar processor vs Vector processor
Non-Uniform Memory Access (NUMA) computers
Register machine vs Stack machine
Harvard architecture vs von Neumann architecture
Cellular architecture

The quantum computer architecture holds the most promise to revolutionize computing.[52]

Logic gates are a common abstraction which can apply to most of the above digital or analog paradigms.

The ability to store and execute lists of instructions called programs makes computers extremely versatile, distinguishing them from calculators. The Church–Turing thesis is a mathematical statement of this versatility: any computer with a minimum capability (being Turing-complete) is, in principle, capable of performing the same tasks that any other computer can perform. Therefore any type of computer (netbook, supercomputer, cellular automaton, etc.) is able to perform the same computational tasks, given enough time and storage capacity.

Misconceptions
Main articles: Human computer and Harvard Computers
[Image: Women as computers in NACA High Speed Flight Station "Computer Room"]

A computer does not need to be electronic, nor even have a processor, nor RAM, nor even a hard disk. While popular usage of the word “computer” is synonymous with a personal electronic computer, the modern[53] definition of a computer is literally “A device that computes, especially a programmable [usually] electronic machine that performs high-speed mathematical or logical operations or that assembles, stores, correlates, or otherwise processes information.”[54] Any device which processes information qualifies as a computer, especially if the processing is purposeful.

Required technology
Main article: Unconventional computing

Historically, computers evolved from mechanical computers and eventually from vacuum tubes to transistors. However, conceptually, computational systems as flexible as a personal computer can be built out of almost anything. For example, a computer made out of billiard balls (the billiard ball computer) is an often-quoted example.[citation needed] More realistically, modern computers are made out of transistors made of photolithographed semiconductors.

There is active research to make computers out of many promising new types of technology, such as optical computers, DNA computers, neural computers, and quantum computers. Most computers are universal, and are able to calculate any computable function, and are limited only by their memory capacity and operating speed. However different designs of computers can give very different performance for particular problems; for example quantum computers can potentially break some modern encryption algorithms (by quantum factoring) very quickly.

Accountancy, or accounting, is the production of information about an enterprise and the transmission of that information from people who have it to those who need it.[1] The communication is generally in the form of financial statements that show in money terms the economic resources under the control of management; the art lies in selecting the information that is relevant to the user and is representationally faithful. The principles of accountancy are applied to business entities in three divisions of practical art, named accounting, bookkeeping, and auditing.[2]

Information technology plays a vital role within accounting. Today many tedious accounting practices have been simplified with the help of computer software. Software such as enterprise resource planning (ERP) systems helps to manage the value chain of companies. ERP systems provide a comprehensive, centralized, integrated source of information that companies can use to manage all major business processes, from purchasing to manufacturing to human resources; such software can replace up to 200 individual software programs that were previously used. Computer-integrated manufacturing allows products to be made completely untouched by human hands and can increase production by reducing errors in the manufacturing process. Computers have reduced the cost of accumulating, storing, and reporting managerial accounting information and have made it possible to produce a more detailed account of all data that is entered into any given system. Computers have also changed business-to-business interaction through e-commerce: rather than dealing with multiple companies to purchase products, a business can buy a product at a lower price, cutting out the third party and vastly reducing the expenses companies once accrued. Inter-organizational information systems enable suppliers and businesses to be connected at all times: when a company is low on a product, the supplier is notified and fulfills an order immediately, which eliminates the need for someone to take inventory, fill out the proper documents, send them out, and wait for the products.[3]

The American Institute of Certified Public Accountants (AICPA) defines accountancy as "...the art of recording, classifying, and summarizing in a significant manner and in terms of money..." transactions and events that are at least partly financial in character, and interpreting the results.[4]

Accounting is thousands of years old; the earliest accounting records, which date back more than 7,000 years, were found in Mesopotamia (Assyrians). The people of that time relied on primitive accounting methods to record the growth of crops and herds. Accounting evolved, improving over the years and advancing as business advanced.[5]

Early accounts served mainly to assist the memory of the businessperson and the audience for the account was the proprietor or record keeper alone. Cruder forms of accounting were inadequate for the problems created by a business entity involving multiple investors, so double-entry bookkeeping first emerged in northern Italy in the 14th century, where trading ventures began to require more capital than a single individual was able to invest. The development of joint-stock companies created wider audiences for accounts, as investors without firsthand knowledge of their operations relied on accounts to provide the requisite information.[6] This development resulted in a split of accounting systems for internal (i.e. management accounting) and external (i.e. financial accounting) purposes, and subsequently also in accounting and disclosure regulations and a growing need for independent attestation of external accounts by auditors.[7]

Today, accounting is called "the language of business"[8] because it is the vehicle for reporting financial information about a business entity to many different groups of people. Accounting that concentrates on reporting to people inside the business entity is called management accounting and is used to provide information to employees, managers, owner-managers and auditors. Management accounting is concerned primarily with providing a basis for making management or operating decisions. Accounting that provides information to people outside the business entity is called financial accounting and provides information to present and potential shareholders, creditors such as banks or vendors, financial analysts, economists, and government agencies. Because these users have different needs, the presentation of financial accounts is very structured and subject to many more rules than management accounting. The body of rules that governs financial accounting in a given jurisdiction is called Generally Accepted Accounting Principles (GAAP); examples of such bodies of rules include International Financial Reporting Standards (IFRS)[9] and US GAAP.

A database is an organized collection of data. The data is typically organized to model relevant aspects of reality (for example, the availability of rooms in hotels), in a way that supports processes requiring this information (for example, finding a hotel with vacancies).

In financial accounting, a balance sheet or statement of financial position is a summary of the financial balances of a sole proprietorship, a business partnership, a corporation or other business organization, such as an LLC or an LLP. Assets, liabilities and ownership equity are listed as of a specific date, such as the end of its financial year. A balance sheet is often described as a "snapshot of a company's financial condition".[1] Of the four basic financial statements, the balance sheet is the only statement which applies to a single point in time of a business' calendar year.

A standard company balance sheet has three parts: assets, liabilities and ownership equity. The main categories of assets are usually listed first, and typically in order of liquidity.[2] Assets are followed by the liabilities. The difference between the assets and the liabilities is known as equity or the net assets or the net worth or capital of the company and according to the accounting equation, net worth must equal assets minus liabilities.[3]

Another way to look at the same equation is that assets equals liabilities plus owner's equity. Looking at the equation in this way shows how assets were financed: either by borrowing money (liability) or by using the owner's money (owner's equity). Balance sheets are usually presented with assets in one section and liabilities and net worth in the other section with the two sections "balancing."
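As an illustrative sketch, the accounting equation can be checked mechanically; the figures are taken from the sample small-business balance sheet later in this section:

def balance_sheet_balances(assets, liabilities, owners_equity):
    """The accounting equation: assets = liabilities + owners' equity."""
    return assets == liabilities + owners_equity

# Sample small-business figures: $37,800 in assets, $30,000 in liabilities,
# $7,800 in owners' equity.
print(balance_sheet_balances(37_800, 30_000, 7_800))   # True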

A business operating entirely in cash can measure its profits by withdrawing the entire bank balance at the end of the period, plus any cash in hand. However, many businesses are not paid immediately; they build up inventories of goods and they acquire buildings and equipment. In other words: businesses have assets and so they cannot, even if they want to, immediately turn these into cash at the end of each period. Often, these businesses owe money to suppliers and to tax authorities, and the proprietors do not withdraw all their original capital and profits at the end of each period. In other words, businesses also have liabilities.

Contents

1 Types 1.1 Personal balance sheet 1.2 US small business balance sheet 2 Public Business Entities balance sheet structure 2.1 Assets 2.2 Liabilities 2.3 Equity 3 Balance sheet substantiation 4 Sample balance sheet 5 See also 6 References

Types

A balance sheet summarizes an organization's or individual's assets, equity and liabilities at a specific point in time. Balance sheets come in two forms: the report form and the account form. Individuals and small businesses tend to have simple balance sheets.[4] Larger businesses tend to have more complex balance sheets, and these are presented in the organization's annual report.[5] Large businesses also may prepare balance sheets for segments of their businesses.[6] A balance sheet is often presented alongside one for a different point in time (typically the previous year) for comparison.[7][8]

Personal balance sheet

A personal balance sheet lists current assets such as cash in checking accounts and savings accounts, long-term assets such as common stock and real estate, current liabilities such as loan debt and mortgage debt due or overdue, and long-term liabilities such as mortgage and other loan debt. Securities and real estate values are listed at market value rather than at historical cost or cost basis. Personal net worth is the difference between an individual's total assets and total liabilities.[9]

US small business balance sheet

Sample Small Business Balance Sheet[10]

Assets:
  Cash: $6,600
  Accounts Receivable: $6,200
  Tools and equipment: $25,000
  Total: $37,800

Liabilities:
  Notes Payable: $5,000
  Accounts Payable: $25,000
  Total liabilities: $30,000

Owners' equity:
  Capital Stock: $7,000
  Retained Earnings: $800
  Total owners' equity: $7,800

Total liabilities and owners' equity: $37,800

A small business balance sheet lists current assets such as cash, accounts receivable, and inventory, fixed assets such as land, buildings, and equipment, intangible assets such as patents, and liabilities such as accounts payable, accrued expenses, and long-term debt. Contingent liabilities such as warranties are noted in the footnotes to the balance sheet. The small business's equity is the difference between total assets and total liabilities.[11]

Public Business Entities balance sheet structure

Guidelines for balance sheets of public business entities are given by the International Accounting Standards Board and numerous country-specific organizations and companies.

Balance sheet account names and usage depend on the organization's country and the type of organization. Government organizations do not generally follow standards established for individuals or businesses.[12][13][14][15]

If applicable to the business, summary values for the following items should be included in the balance sheet.[16] Assets are all the things the business owns, including property, tools, vehicles, etc.

Assets

Current assets

Cash and cash equivalents
Accounts receivable
Inventories
Prepaid expenses for future services that will be used within a year

Non-current assets (Fixed assets)

Property, plant and equipment
Investment property, such as real estate held for investment purposes
Intangible assets
Financial assets (excluding investments accounted for using the equity method, accounts receivables, and cash and cash equivalents)
Investments accounted for using the equity method
Biological assets, which are living plants or animals. Bearer biological assets are plants or animals which bear agricultural produce for harvest, such as apple trees grown to produce apples and sheep raised to produce wool.[17]

Liabilities

See Liability (accounting)

Accounts payable
Provisions for warranties or court decisions
Financial liabilities (excluding provisions and accounts payable), such as promissory notes and corporate bonds
Liabilities and assets for current tax
Deferred tax liabilities and deferred tax assets
Unearned revenue for services paid for by customers but not yet provided

Equity

The net assets shown by the balance sheet equals the third part of the balance sheet, which is known as the shareholders' equity. It comprises:

Issued capital and reserves attributable to equity holders of the parent company (controlling interest)
Non-controlling interest in equity

Formally, shareholders' equity is part of the company's liabilities: they are funds "owing" to shareholders (after payment of all other liabilities); usually, however, "liabilities" is used in the more restrictive sense of liabilities excluding shareholders' equity. The balance of assets and liabilities (including shareholders' equity) is not a coincidence. Records of the values of each account in the balance sheet are maintained using a system of accounting known as double-entry bookkeeping. In this sense, shareholders' equity by construction must equal assets minus liabilities, and is a residual.

Regarding the items in equity section, the following disclosures are required:

Numbers of shares authorized, issued and fully paid, and issued but not fully paid
Par value of shares
Reconciliation of shares outstanding at the beginning and the end of the period
Description of rights, preferences, and restrictions of shares
Treasury shares, including shares held by subsidiaries and associates
Shares reserved for issuance under options and contracts
A description of the nature and purpose of each reserve within owners' equity

Balance sheet substantiation

Balance Sheet Substantiation is the accounting process conducted by businesses on a regular basis to confirm that the balances held in the primary accounting system of record (e.g. SAP, Oracle, or another ERP system's General Ledger) are reconciled with (in balance with) the balance and transaction records held in the same or supporting sub-systems.

Balance Sheet Substantiation comprises several processes, including reconciliation (at a transactional or at a balance level) of the account, a review of the reconciliation and any pertinent supporting documentation, and a formal certification (sign-off) of the account in a predetermined form driven by corporate policy.
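At its core, the reconciliation step compares a ledger balance with the sum of the records that are supposed to support it. A minimal Python sketch, with made-up figures:

def reconcile(gl_balance, supporting_entries):
    """Flag an account whose ledger balance disagrees with its supporting detail."""
    difference = gl_balance - sum(supporting_entries)
    return difference == 0, difference

print(reconcile(30_000, [5_000, 25_000]))   # (True, 0) -- in balance
print(reconcile(30_000, [5_000, 24_000]))   # (False, 1000) -- needs investigation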

Balance Sheet Substantiation is an important process that is typically carried out on a monthly, quarterly and year-end basis. The results help to drive the regulatory balance sheet reporting obligations of the organization.

Historically, Balance Sheet Substantiation has been a wholly manual process, driven by spreadsheets, email and manual monitoring and reporting. In recent years software solutions have been developed to bring a level of process automation, standardization and enhanced control to the Balance Sheet Substantiation or account certification process. These solutions are suitable for organizations with a high volume of accounts and/or personnel involved in the Balance Sheet Substantiation process and can be used to drive efficiencies, improve transparency and help to reduce risk.

Balance Sheet Substantiation is a key control process in the SOX 404 top-down risk assessment.

Sample balance sheet

The following balance sheet is a very brief example prepared in accordance with IFRS. It does not show all possible kinds of assets, liabilities and equity, but it shows the most usual ones. Because it shows goodwill, it could be a consolidated balance sheet. Monetary values are not shown, and summary (total) rows are omitted as well.

Balance Sheet of XYZ, Ltd. As of 31 December 2009

ASSETS
Current Assets
  Cash and Cash Equivalents
  Accounts Receivable (Debtors)
    Less: Allowances for Doubtful Accounts
  Inventories
  Prepaid Expenses
  Investment Securities (Held for trading)
  Other Current Assets
Non-Current Assets (Fixed Assets)
  Property, Plant and Equipment (PPE)
    Less: Accumulated Depreciation
  Investment Securities (Available for sale/Held-to-maturity)
  Investments in Associates
  Intangible Assets (Patent, Copyright, Trademark, etc.)
    Less: Accumulated Amortization
  Goodwill
  Other Non-Current Assets, e.g. Deferred Tax Assets, Lease Receivable

LIABILITIES and SHAREHOLDERS' EQUITY
LIABILITIES
Current Liabilities (Creditors: amounts falling due within one year)
  Accounts Payable
  Current Income Tax Payable
  Current portion of Loans Payable
  Short-term Provisions
  Other Current Liabilities, e.g. Unearned Revenue, Deposits

Non-Current Liabilities (Creditors: amounts falling due after more than one year)
  Loans Payable
  Issued Debt Securities, e.g. Notes/Bonds Payable
  Deferred Tax Liabilities
  Provisions, e.g. Pension Obligations
  Other Non-Current Liabilities, e.g. Lease Obligations
SHAREHOLDERS' EQUITY
  Paid-in Capital
    Share Capital (Ordinary Shares, Preference Shares)
    Share Premium
    Less: Treasury Shares
  Retained Earnings
  Revaluation Reserve
  Accumulated Other Comprehensive Income

Non-Controlling Interest

See also

Balance sheet substantiation
Cash flow statement
Income statement
Minority interest
Model audit

National accounts
Off-balance-sheet
Reformatted balance sheet
Statement of retained earnings (statement of changes in equity)

Database management systems (DBMSs) are specially designed applications that interact with the user, other applications, and the database itself to capture and analyze data. A general-purpose database management system (DBMS) is a software system designed to allow the definition, creation, querying, update, and administration of databases. Well-known DBMSs include MySQL, PostgreSQL, SQLite, Microsoft SQL Server, Microsoft Access, Oracle, SAP, dBASE, FoxPro, and IBM DB2. A database is not generally portable across different DBMSs, but different DBMSs can interoperate by using standards such as SQL and ODBC or JDBC to allow a single application to work with more than one database.

Contents

1 Terminology and overview 2 Applications and roles 2.1 General-purpose and special-purpose DBMSs 3 History 3.1 1960s navigational DBMS 3.2 1970s relational DBMS 3.3 Database machines and appliances 3.4 Late-1970s SQL DBMS 3.5 1980s desktop databases 3.6 1980s object-oriented databases 3.7 2000s NoSQL and NewSQL databases 4 Database research 5 Database type examples 6 Database design and modeling 6.1 Database models 6.2 External, conceptual, and internal views 7 Database languages 8 Performance, security, and availability 8.1 Database storage 8.1.1 Database materialized views 8.1.2 Database and database object replication 8.2 Database security 8.3 Transactions and concurrency 8.4 Migration 8.5 Database building, maintaining, and tuning 8.6 Backup and restore 8.7 Other 9 See also 10 References 11 Further reading 12 External links

Terminology and overview

Formally, the term "database" refers to the data itself and supporting data structures.

Databases are created to manage large quantities of information by inputting, storing, retrieving, and managing that information. Databases are set up so that one set of software programs provides all users with access to all the data. Databases typically use a table format made up of rows and columns. Each piece of information is entered into a row, which then creates a record. Once the records are created in the database, they can be organized and operated on in a variety of ways that are limited mainly by the software being used. Databases are somewhat similar to spreadsheets, but databases are more powerful than spreadsheets because of their ability to manipulate the data that is stored. It is possible to do a number of things with a database that would be more difficult to do with a spreadsheet. The word data is normally defined as facts from which information can be derived. A database may contain millions of such facts. From these facts the database management system (DBMS) can develop information.

A "database management system" (DBMS) is a suite of computer software providing the interface between users and a database or databases. Because they are so closely related, the term "database" when used casually often refers to both a DBMS and the data it manipulates.

Outside the world of professional information technology, the term database is sometimes used casually to refer to any collection of data (perhaps a spreadsheet, maybe even a card index). This article is concerned only with databases where the size and usage requirements necessitate use of a database management system.[1]

The interactions catered for by most existing DBMSs fall into four main groups:

Data definition. Defining new data structures for a database, removing data structures from the database, modifying the structure of existing data.
Update. Inserting, modifying, and deleting data.
Retrieval. Obtaining information either for end-user queries and reports or for processing by applications.
Administration. Registering and monitoring users, enforcing data security, monitoring performance, maintaining data integrity, dealing with concurrency control, and recovering information if the system fails.
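The first three groups can be illustrated with a short Python sketch using the standard library's sqlite3 module; the table and data are invented for the example:

import sqlite3

conn = sqlite3.connect(":memory:")   # a throwaway in-memory database

# Data definition: create a new data structure.
conn.execute("CREATE TABLE rooms (hotel TEXT, number INTEGER, vacant INTEGER)")

# Update: insert some data.
conn.execute("INSERT INTO rooms VALUES ('Grand', 101, 1)")
conn.execute("INSERT INTO rooms VALUES ('Grand', 102, 0)")

# Retrieval: query for vacant rooms.
for row in conn.execute("SELECT hotel, number FROM rooms WHERE vacant = 1"):
    print(row)                       # ('Grand', 101)

# Administration (users, security, backups) is handled by the DBMS itself;
# in SQLite it largely reduces to managing the database file.
conn.close()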

A DBMS is responsible for maintaining the integrity and security of stored data, and for recovering information if the system fails.

Both a database and its DBMS conform to the principles of a particular database model.[2] "Database system" refers collectively to the database model, database management system, and database.[3]

Physically, database servers are dedicated computers that hold the actual databases and run only the DBMS and related software. Database servers are usually multiprocessor computers, with generous memory and RAID disk arrays used for stable storage. Hardware database accelerators, connected to one or more servers via a high-speed channel, are also used in large volume transaction processing environments. DBMSs are found at the heart of most database applications. DBMSs may be built around a custom multitasking kernel with built-in networking support, but modern DBMSs typically rely on a standard operating system to provide these functions.[citation needed] Since DBMSs constitute an economically significant market, computer and storage vendors often take DBMS requirements into account in their own development plans.[citation needed]

Databases and DBMSs can be categorized according to the database model(s) that they support (such as relational or XML), the type(s) of computer they run on (from a server cluster to a mobile phone), the query language(s) used to access the database (such as SQL or XQuery), and their internal engineering, which affects performance, scalability, resilience, and security.

In telecommunications, bit rate or data transfer rate is the average number of bits, characters, or blocks per unit time passing between equipment in a data transmission system. This is typically measured in multiples of the unit bit per second or byte per second. Various other units may also be used to measure the data rate.

Bit rates
Decimal prefixes (SI)
Name | Symbol | Multiple
kilobit per second | kbit/s | 10^3
megabit per second | Mbit/s | 10^6
gigabit per second | Gbit/s | 10^9
terabit per second | Tbit/s | 10^12
Binary prefixes (IEC 60027-2)
kibibit per second | Kibit/s | 2^10
mebibit per second | Mibit/s | 2^20
gibibit per second | Gibit/s | 2^30
tebibit per second | Tibit/s | 2^40

Contents

1 Standards for prefixes and suffixes 1.1 Prefix: k vs Ki       1.2 Suffix: b vs B        1.3 Problematic variations 2 Decimal multiples of bits 2.1 Kilobit per second 2.2 Megabit per second 2.3 Gigabit per second 2.4 Terabit per second 3 Binary multiples of bits 3.1 Kibibit per second 3.2 Mebibit per second 3.3 Gibibit per second 3.4 Tebibit per second 4 Decimal multiples of bytes 4.1 Kilobyte per second 4.2 Megabyte per second 4.3 Gigabyte per second 4.4 Terabyte per second 5 Binary multiples of bytes 5.1 Kibibyte per second 5.2 Mebibyte per second 5.3 Gibibyte per second 5.4 Tebibyte per second 6 Conversion formula 7 Examples of bit rates 8 See also 9 Notes 10 References

Standards for prefixes and suffixes
See also: Bit rate, for the differences between gross bit rate and net bit rate and between throughput and goodput.

To be as explicit as possible, both the prefix and the suffix of the unit must be known. For example, the abbreviation 2 Mb can actually be expanded in 4 different ways (mega- vs mebi- and -bit vs -byte). The difference in the associated numbers can be significant:

Unit | Bits | Bits / 1,000,000
Mega-bit | 1,000,000 | 1.0
Mebi-bit | 1,048,576 | 1.05
Mega-byte | 8,000,000 | 8.0
Mebi-byte | 8,388,608 | 8.39

The table above shows an approximate 5% difference between the corresponding mega- and mebi- units, and a factor of eight between the -bit and -byte units. Explicitness in units is important because the prefix discrepancy grows with larger prefixes (for example, gibi- exceeds giga- by about 7.4%).

Prefix: k vs Ki

k- stands for kilo, meaning 1,000, while Ki- stands for kilobinary ("kibi-"), meaning 1,024. The standardized binary prefixes such as Ki- are a relatively recent introduction (IEEE 1541-2002, reaffirmed on 27 March 2008). K- is nevertheless often used to mean 1,024, especially in KB, the kilobyte.

Suffix: b vs B

b stands for bit and B stands for byte. In the context of data rate units, one byte refers to 8 bits. For example, when a 1 Mb/s connection is advertised, it usually means that the maximum achievable download bandwidth is 1 megabit/s (one million bits per second), which is 0.125 MB/s (megabytes per second), or about 0.1192 MiB/s (mebibytes per second).
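The arithmetic behind that example is easy to make explicit; a short Python sketch of the conversions:

bits_per_second = 1_000_000             # an advertised "1 Mb/s" connection

megabytes_per_second = bits_per_second / 8 / 1_000_000
mebibytes_per_second = bits_per_second / 8 / 1_048_576

print(megabytes_per_second)             # 0.125 MB/s
print(round(mebibytes_per_second, 4))   # 0.1192 MiB/s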

Problematic variations

In 1999, the International Electrotechnical Commission (IEC) published Amendment 2 to "IEC 60027-2: Letter symbols to be used in electrical technology – Part 2: Telecommunications and electronics." This standard, approved in 1998, introduced the prefixes kibi-, mebi-, gibi-, tebi-, pebi-, and exbi- to be used in specifying binary multiples of a quantity. Each name is derived from the first two letters of the corresponding SI prefix followed by bi (short for binary). The standard also clarifies that the SI prefixes are to be used only to mean powers of 10 and never powers of 2.

Decimal multiples of bits

These units are often not used as the standards suggest; see the section "Problematic variations" above.

Kilobit per second

A kilobit per second (kbit/s or kb/s) is a unit of data transfer rate equal to:

1,000 bits per second or   125 bytes per second.

Megabit per second

A megabit per second (Mbit/s or Mb/s; the lowercase m of mbit/s would strictly denote millibits per second) is a unit of data transfer rate equal to:

1,000,000 bits per second or   1,000 kilobits per second or    125,000 bytes per second or    125 kilobytes per second.

Gigabit per second

A gigabit per second (Gbit/s, or Gb/s) is a unit of data transfer rate equal to:

1,000 megabits per second or   1,000,000 kilobits per second or    1,000,000,000 bits per second or    125,000,000 bytes per second.

Terabit per second

A terabit per second (Tbit/s or Tb/s) is a unit of data transfer rate equal to:

1,000 gigabits per second or   1,000,000 megabits per second or    1,000,000,000 kilobits per second or    1,000,000,000,000 bits per second or    125,000,000,000 bytes per second.

Binary multiples of bits
For more details on prefixes such as kibi-, mebi-, gibi-, and tebi-, see Binary prefix.

Note that for binary prefixes the prefix name (e.g. "kibi") is not capitalized, but the prefix symbol (e.g. "Ki") is.

Kibibit per second

A kibibit per second (Kibit/s or Kib/s) is a unit of data transfer rate equal to:

1,024 bits per second.

Mebibit per second

A mebibit per second (Mibit/s or Mib/s) is a unit of data transfer rate equal to:

1,048,576 bits per second or   1,024 kibibits per second.

Gibibit per second

A gibibit per second (Gibit/s or Gib/s) is a unit of data transfer rate equal to:

1,073,741,824 bits per second or   1,048,576 kibibits per second or    1,024 mebibits per second.

Tebibit per second

A tebibit per second (Tibit/s or Tib/s) is a unit of data transfer rate equal to:

1,099,511,627,776 bits per second, or 1,073,741,824 kibibits per second, or 1,048,576 mebibits per second, or 1,024 gibibits per second.

Decimal multiples of bytes

These units are often not used as the standards suggest; see the section "Problematic variations" above.

Kilobyte per second

A kilobyte per second (kB/s) is a unit of data transfer rate equal to:

8,000 bits per second, or   1,000 bytes per second, or    8 kilobits per second.

Megabyte per second

A megabyte per second (MB/s; not to be confused with Mbit/s, megabits per second) is a unit of data transfer rate equal to:

8,000,000 bits per second, or   1,000,000 bytes per second, or    1,000 kilobytes per second, or    8 megabits per second.

Gigabyte per second

A gigabyte per second (GB/s) is a unit of data transfer rate equal to:

8,000,000,000 bits per second, or   1,000,000,000 bytes per second, or    1,000,000 kilobytes per second, or    1,000 megabytes per second, or    8 gigabits per second.

Terabyte per second

A terabyte per second (TB/s) is a unit of data transfer rate equal to:

8,000,000,000,000 bits per second, or   1,000,000,000,000 bytes per second, or    1,000,000,000 kilobytes per second, or    1,000,000 megabytes per second, or    1,000 gigabytes per second, or    8 terabits per second.

Binary multiples of bytes
For more details on prefixes such as kibi-, mebi-, gibi-, and tebi-, see Binary prefix.

Kibibyte per second

A kibibyte per second (KiB/s) is a unit of data transfer rate equal to:

1,024 bytes per second, or   8 kibibits per second, or    8192 bits per second.

Mebibyte per second

A mebibyte per second (MiB/s) is a unit of data transfer rate equal to:

1,048,576 bytes per second, or   1,024 kibibytes per second, or    8 mebibits per second, or    8192 kibibits per second, or    8,388,608 bits per second.

Gibibyte per second

A gibibyte per second (GiB/s) is a unit of data transfer rate equal to:

1,073,741,824 bytes per second, or   1,048,576 kibibytes per second, or    1,024 mebibytes per second, or    8 gibibits per second, or    8192 mebibits per second, or    8,388,608 kibibits per second, or    8,589,934,592 bits per second.

Tebibyte per second

A tebibyte per second (TiB/s) is a unit of data transfer rate equal to:

1,099,511,627,776 bytes per second, or   1,073,741,824 kibibytes per second, or    1,048,576 mebibytes per second, or    1,024 gibibytes per second, or    8 tebibits per second, or    8192 gibibits per second, or    8,388,608 mebibits per second, or    8,589,934,592 kibibits per second, or    8,796,093,022,208 bits per second.

Conversion formula

Name | Symbol | bits per second | bytes per second | bits per second (formula) | bytes per second (formula)
bit per second | bit/s | 1 | 0.125 | 1 | 1/8
byte per second | B/s | 8 | 1 | 8 | 1
kilobit per second | kbit/s | 1,000 | 125 | 10^3 | 10^3/8
kibibit per second | Kibit/s | 1,024 | 128 | 2^10 | 2^7
kilobyte per second | kB/s | 8,000 | 1,000 | 8x10^3 | 10^3
kibibyte per second | KiB/s | 8,192 | 1,024 | 2^13 | 2^10
megabit per second | Mbit/s | 1,000,000 | 125,000 | 10^6 | 10^6/8
mebibit per second | Mibit/s | 1,048,576 | 131,072 | 2^20 | 2^17
megabyte per second | MB/s | 8,000,000 | 1,000,000 | 8x10^6 | 10^6
mebibyte per second | MiB/s | 8,388,608 | 1,048,576 | 2^23 | 2^20
gigabit per second | Gbit/s | 1,000,000,000 | 125,000,000 | 10^9 | 10^9/8
gibibit per second | Gibit/s | 1,073,741,824 | 134,217,728 | 2^30 | 2^27
gigabyte per second | GB/s | 8,000,000,000 | 1,000,000,000 | 8x10^9 | 10^9
gibibyte per second | GiB/s | 8,589,934,592 | 1,073,741,824 | 2^33 | 2^30
terabit per second | Tbit/s | 1,000,000,000,000 | 125,000,000,000 | 10^12 | 10^12/8
tebibit per second | Tibit/s | 1,099,511,627,776 | 137,438,953,472 | 2^40 | 2^37
terabyte per second | TB/s | 8,000,000,000,000 | 1,000,000,000,000 | 8x10^12 | 10^12
tebibyte per second | TiB/s | 8,796,093,022,208 | 1,099,511,627,776 | 2^43 | 2^40
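The entire table reduces to two factors: the prefix multiple and a factor of 8 between bits and bytes. A short Python sketch of the conversion (the function name and interface are invented for illustration):

PREFIXES = {"k": 10**3, "M": 10**6, "G": 10**9, "T": 10**12,
            "Ki": 2**10, "Mi": 2**20, "Gi": 2**30, "Ti": 2**40}

def to_bits_per_second(value, prefix, unit):
    """Convert e.g. (1, 'Mi', 'B'), meaning 1 MiB/s, to bits per second."""
    bits = 8 if unit == "B" else 1        # a byte is 8 bits
    return value * PREFIXES[prefix] * bits

print(to_bits_per_second(1, "Mi", "B"))   # 8388608
print(to_bits_per_second(56, "k", "b"))   # 56000 -- the classic 56k modem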
Examples of bit rates
Main article: List of device bit rates

Quantity | Unit | bits per second | bytes per second | Field | Description
56 | kbit/s | 56,000 | 7,000 | Networking | 56k modem
64 | kbit/s | 64,000 | 8,000 | Networking | one ISDN B channel or best-quality, uncompressed telephone line
1,536 | kbit/s | 1,536,000 | 192,000 | Networking | 24 telephone channels in the US, or a good VTC T1
1 | Gbit/s | 1,000,000,000 | 125,000,000 | Networking | Gigabit Ethernet
10 | Gbit/s | 10,000,000,000 | 1,250,000,000 | Networking | 10 Gigabit Ethernet
1 | Tbit/s | 1,000,000,000,000 | 125,000,000,000 | Networking | SEA-ME-WE 4 submarine communications cable – 1.28 terabits per second [1]
4 | kbit/s | 4,000 | 500 | Audio data | minimum achieved for encoding recognizable speech (using special-purpose speech codecs)
8 | kbit/s | 8,000 | 1,000 | Audio data | low bit rate telephone quality
32 | kbit/s | 32,000 | 4,000 | Audio data | MW quality and ADPCM voice in telephony, doubling the capacity of a 30-channel link to 60 channels
128 | kbit/s | 128,000 | 16,000 | Audio data | 128 kbit/s MP3
192 | kbit/s | 192,000 | 24,000 | Audio data | nearly CD quality[citation needed] for a file compressed in the MP3 format
1,411.2 | kbit/s | 1,411,200 | 176,400 | Audio data | CD audio (uncompressed, 16-bit samples × 44.1 kHz × 2 channels)
2 | Mbit/s | 2,000,000 | 250,000 | Video data | 30 channels of telephone audio or a video teleconference at VHS quality
8 | Mbit/s | 8,000,000 | 1,000,000 | Video data | DVD quality
27 | Mbit/s | 27,000,000 | 3,375,000 | Video data | HDTV quality
1.244 | Gbit/s | 1,244,000,000 | 155,500,000 | Networking | OC-24, a 1.244 Gbit/s SONET data channel
9.953 | Gbit/s | 9,953,000,000 | 1,244,125,000 | Networking | OC-192, a 9.953 Gbit/s SONET data channel
39.813 | Gbit/s | 39,813,000,000 | 4,976,625,000 | Networking | OC-768, a 39.813 Gbit/s SONET data channel, the fastest in current use
60 | MB/s | 480,000,000 | 60,000,000 | Computer data interfaces | USB 2.0
625 | MB/s | 5,000,000,000 | 625,000,000 | Computer data interfaces | USB 3.0
98.3 | MB/s | 786,432,000 | 98,304,000 | Computer data interfaces | FireWire IEEE 1394b-2002 S800
120 | MB/s | 960,000,000 | 120,000,000 | Computer data interfaces | hard drive read, Samsung SpinPoint F1 HD103Uj [2]
133 | MB/s | 1,064,000,000 | 133,000,000 | Computer data interfaces | PATA, 33–133 MB/s
188 | MB/s | 1,504,000,000 | 188,000,000 | Computer data interfaces | SATA 1.5 Gbit/s, first generation
375 | MB/s | 3,000,000,000 | 375,000,000 | Computer data interfaces | SATA 3 Gbit/s, second generation
750 | MB/s | 6,000,000,000 | 750,000,000 | Computer data interfaces | SATA 6 Gbit/s, third generation
533 | MB/s | 4,264,000,000 | 533,000,000 | Computer data interfaces | PCI, 133–533 MB/s
1,250 | MB/s | 10,000,000,000 | 1,250,000,000 | Computer data interfaces | Thunderbolt
8,000 | MB/s | 64,000,000,000 | 8,000,000,000 | Computer data interfaces | PCI Express x16 v2.0
12,000 | MB/s | 96,000,000,000 | 12,000,000,000 | Computer data interfaces | InfiniBand 12X QDR

See also

Binary prefix Bit rate List of device bandwidths Orders of magnitude (data) SI prefix