User talk:Garley~enwiki/sandbox

Computer programming

Computer programming (often shortened to programming, scripting, or coding) is the process of designing, writing, testing, debugging, and maintaining the source code of computer programs. This source code is written in one or more programming languages (such as C++, C#, Java, Python, Smalltalk, etc.). The purpose of programming is to create a set of instructions that computers use to perform specific operations or to exhibit desired behaviors. The process of writing source code often requires expertise in many different subjects, including knowledge of the application domain, specialized algorithms, and formal logic. Within software engineering, programming (the implementation) is regarded as one phase in a software development process.

There is an ongoing debate on the extent to which the writing of programs is an art form, a craft, or an engineering discipline.[1] In general, good programming is considered to be the measured application of all three, with the goal of producing an efficient and evolvable software solution (the criteria for "efficient" and "evolvable" vary considerably). The discipline differs from many other technical professions in that programmers, in general, do not need to be licensed or pass any standardized (or governmentally regulated) certification tests in order to call themselves "programmers" or even "software engineers." Because the discipline covers many areas, which may or may not include critical applications, it is debatable whether licensing is required for the profession as a whole. In most cases, the discipline is self-governed by the entities which require the programming, and sometimes very strict environments are defined (e.g. United States Air Force use of AdaCore and security clearance). However, representing oneself as a "Professional Software Engineer" without a license from an accredited institution is illegal in many parts of the world.

Another ongoing debate is the extent to which the programming language used in writing computer programs affects the form that the final program takes. This debate is analogous to that surrounding the Sapir–Whorf hypothesis[2] in linguistics and cognitive science, which postulates that a particular spoken language's nature influences the habitual thought of its speakers. Different language patterns yield different patterns of thought. This idea challenges the possibility of representing the world perfectly with language, because it acknowledges that the mechanisms of any language condition the thoughts of its speaker community.

== History ==

See also: History of programming languages

Ada Lovelace created the first algorithm designed for processing by a computer and is usually recognized as history's first computer programmer.

Ancient cultures had no conception of computing beyond simple arithmetic. The only mechanical device that existed for numerical computation at the beginning of human history was the abacus, invented in Sumeria circa 2500 BC. Later, the Antikythera mechanism, invented some time around 100 BC in ancient Greece, was the first mechanical calculator to use gears of various sizes and configurations to perform calculations;[3] it tracked the metonic cycle still used in lunisolar calendars and could also be used to calculate the dates of the Olympiads.[4]

The Kurdish medieval scientist Al-Jazari built programmable automata in 1206 AD. One system employed in these devices was the use of pegs and cams placed into a wooden drum at specific locations, which would sequentially trigger levers that in turn operated percussion instruments. The output of this device was a small drummer playing various rhythms and drum patterns.[5][6]

The Jacquard loom, which Joseph Marie Jacquard developed in 1801, used a series of pasteboard cards with holes punched in them. The hole pattern represented the pattern that the loom had to follow in weaving cloth. The loom could produce entirely different weaves using different sets of cards. Charles Babbage adopted the use of punched cards around 1830 to control his Analytical Engine. The first computer program was written for the Analytical Engine by the mathematician Ada Lovelace to calculate a sequence of Bernoulli numbers.[7]

The synthesis of numerical calculation, predetermined operation and output, along with a way to organize and input instructions in a manner relatively easy for humans to conceive and produce, led to the modern development of computer programming. The development of computer programming accelerated through the Industrial Revolution.

Data and instructions were once stored on external punched cards, which were kept in order and arranged in program decks.

In the 1880s, Herman Hollerith invented the recording of data on a medium that could then be read by a machine. Prior uses of machine-readable media had been for lists of instructions (not data) to drive programmed machines such as Jacquard looms and mechanized musical instruments. "After some initial trials with paper tape, he settled on punched cards..."[8] To process these punched cards, first known as "Hollerith cards", he invented the keypunch, sorter, and tabulator unit record machines.[9] These inventions were the foundation of the data processing industry. In 1896 he founded the Tabulating Machine Company (which later became the core of IBM). The addition of a control panel (plugboard) to his 1906 Type I Tabulator allowed it to do different jobs without having to be physically rebuilt. By the late 1940s, there were several unit record calculators, such as the IBM 602 and IBM 604, whose control panels specified a sequence (list) of operations and which were thus programmable machines.

The invention of the von Neumann architecture allowed computer programs to be stored in computer memory. Early programs had to be painstakingly crafted using the instructions (elementary operations) of the particular machine, often in binary notation. Every model of computer would likely use different instructions (machine language) to do the same task. Later, assembly languages were developed that let the programmer specify each instruction in a text format, entering abbreviations for each operation code instead of a number and specifying addresses in symbolic form (e.g., ADD X, TOTAL). Entering a program in assembly language is usually more convenient, faster, and less prone to human error than using machine language, but because an assembly language is little more than a different notation for a machine language, any two machines with different instruction sets also have different assembly languages.

During World War II, some of the earliest computer programmers were women. According to Dr. Sadie Plant, programming is essentially feminine—not simply because women, from Ada Lovelace to Grace Hopper, were the first programmers, but because of the historical and theoretical ties between programming and what Freud called the quintessentially feminine invention of weaving, between female sexuality as mimicry and the mimicry grounding Turing's vision of computers as universal machines. Women, Plant argues, "have not merely had a minor part to play in the emergence of digital machines... Theirs is not a subsidiary role which needs to be rescued for posterity, a small supplement whose inclusion would set the existing records straight... Hardware, software, wetware – before their beginnings and beyond their ends, women have been the simulators, assemblers, and programmers of the digital machines."[10]

Wired control panel for an IBM 402 Accounting Machine.

In 1954, FORTRAN was invented; it was the first high-level programming language to have a functional implementation, as opposed to just a design on paper.[11][12] (A high-level language is, in very general terms, any programming language that allows the programmer to write programs in terms that are more abstract than assembly language instructions, i.e. at a level of abstraction "higher" than that of an assembly language.) It allowed programmers to specify calculations by entering a formula directly (e.g. Y = X*2 + 5*X + 9). The program text, or source, is converted into machine instructions using a special program called a compiler, which translates the FORTRAN program into machine language. In fact, the name FORTRAN stands for "Formula Translation". Many other languages were developed, including some for commercial programming, such as COBOL. Programs were mostly still entered using punched cards or paper tape. (See computer programming in the punch card era.)

By the late 1960s, data storage devices and computer terminals became inexpensive enough that programs could be created by typing directly into the computers. Text editors were developed that allowed changes and corrections to be made much more easily than with punched cards. (Usually, an error in punching a card meant that the card had to be discarded and a new one punched to replace it.)

As time has progressed, computers have made giant leaps in the area of processing power. This has brought about newer programming languages that are more abstracted from the underlying hardware. Popular programming languages of the modern era include ActionScript, C++, C#, Haskell, HTML with PHP, Java, JavaScript, Objective-C, Perl, Python, Ruby, Smalltalk, SQL, Visual Basic, and dozens more.[13] Although these high-level languages usually incur greater overhead, the increase in speed of modern computers has made the use of these languages much more practical than in the past. These increasingly abstracted languages typically are easier to learn and allow the programmer to develop applications much more efficiently and with less source code. However, high-level languages are still impractical for a few programs, such as those where low-level hardware control is necessary or where maximum processing speed is vital.

Computer programming has become a popular career in the developed world, particularly in the United States, Europe, and Japan. Due to the high labor cost of programmers in these countries, some forms of programming have been increasingly subject to offshore outsourcing (importing software and services from other countries, usually at a lower wage), making programming career decisions in developed countries more complicated, while increasing economic opportunities for programmers in less developed areas, particularly China and India.

== Modern programming ==

=== Quality requirements ===

Whatever the approach to software development may be, the final program must satisfy some fundamental properties. The following properties are among the most relevant:

* Reliability: how often the results of a program are correct. This depends on conceptual correctness of algorithms and minimization of programming mistakes, such as mistakes in resource management (e.g., buffer overflows and race conditions) and logic errors (such as division by zero or off-by-one errors).
* Robustness: how well a program anticipates problems not due to programmer error. This includes situations such as incorrect, inappropriate or corrupt data, unavailability of needed resources such as memory, operating system services and network connections, and user error. (The sketch following this list illustrates these first two properties.)
* Usability: the ergonomics of a program: the ease with which a person can use the program for its intended purpose, or in some cases even unanticipated purposes. Such issues can make or break its success even regardless of other issues. This involves a wide range of textual, graphical and sometimes hardware elements that improve the clarity, intuitiveness, cohesiveness and completeness of a program's user interface.
* Portability: the range of computer hardware and operating system platforms on which the source code of a program can be compiled/interpreted and run. This depends on differences in the programming facilities provided by the different platforms, including hardware and operating system resources, expected behavior of the hardware and operating system, and the availability of platform-specific compilers (and sometimes libraries) for the language of the source code.
* Maintainability: the ease with which a program can be modified by its present or future developers in order to make improvements or customizations, fix bugs and security holes, or adapt it to new environments. Good practices during initial development make the difference in this regard. This quality may not be directly apparent to the end user, but it can significantly affect the fate of a program over the long term.
* Efficiency/performance: the amount of system resources a program consumes (processor time, memory space, slow devices such as disks, network bandwidth, and to some extent even user interaction): the less, the better. This also includes correct disposal of resources, such as cleaning up temporary files and eliminating memory leaks.
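To make the first two properties concrete, here is a minimal sketch in Python (the function name, messages, and numbers are invented for illustration and are not drawn from any cited source); it guards against the division-by-zero and invalid-input problems mentioned above.

<syntaxhighlight lang="python">
def average_per_item(total_cost, item_count):
    """Return the average cost per item, guarding against common failure modes."""
    # Robustness: reject data of the wrong type instead of failing unpredictably later.
    if not isinstance(total_cost, (int, float)) or not isinstance(item_count, int):
        raise TypeError("total_cost must be a number and item_count an integer")
    # Reliability: an item_count of zero would trigger a division-by-zero logic error.
    if item_count <= 0:
        raise ValueError("item_count must be a positive integer")
    return total_cost / item_count

# The caller decides how to handle bad input rather than letting the program crash.
try:
    print(average_per_item(120.0, 0))
except ValueError as error:
    print("could not compute average:", error)
</syntaxhighlight>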

=== Readability of source code ===

In computer programming, readability refers to the ease with which a human reader can comprehend the purpose, control flow, and operation of source code. It affects the aspects of quality above, including portability, usability and, most importantly, maintainability.

Readability is important because programmers spend the majority of their time reading, trying to understand, and modifying existing source code, rather than writing new source code. Unreadable code often leads to bugs, inefficiencies, and duplicated code. A study[14] found that a few simple readability transformations made code shorter and drastically reduced the time required to understand it.

Following a consistent programming style often helps readability. However, readability is more than just programming style. Many factors, having little or nothing to do with the ability of the computer to efficiently compile and execute the code, contribute to readability.[15] Some of these factors, illustrated in the sketch below, include:

* Different indentation styles (whitespace)
* Comments
* Decomposition
* Naming conventions for objects (such as variables, classes, procedures, etc.)
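A minimal, invented example of how these factors affect readability (the function and variable names are hypothetical, not taken from the study cited above): both definitions below compute exactly the same value, but the second uses consistent indentation, descriptive names, and comments explaining intent.

<syntaxhighlight lang="python">
# Harder to read: cryptic names, cramped formatting, no explanation of intent.
def f(a,b,c):
 return 2*((a*b)+(a*c)+(b*c))

# Easier to read: descriptive names, consistent indentation, and comments;
# the underlying computation is unchanged.
def surface_area_of_box(length, width, height):
    """Return the total surface area of a rectangular box."""
    # Each of the three pairs of opposite faces contributes twice its area.
    return 2 * (length * width + length * height + width * height)
</syntaxhighlight>

Both functions behave identically when run; the difference lies only in how quickly a later reader can understand and safely modify them.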