User:Wcherowi/sandbox


Points on a line
In most geometries a line is a primitive (undefined) object type. A model for the geometry supplies an interpretation of what points, lines and other object types are. This abstract point of view requires a richer vocabulary to describe its concepts than does Euclidean geometry. Thus, in spherical geometry where lines are represented in the standard model by great circles of a sphere, sets of collinear points are those which lie on the same great circle. Such points do not lie on a "straight line" in the Euclidean sense and would not be thought of as being "in a row". Of course, if the geometry is Euclidean, then collinear points will lie on a straight line.

Linear maps (or linear functions), as geometric maps, are those which preserve the collinearity property (that is, they map collinear point sets to collinear point sets) which is another way to say that they map lines to lines.
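As a quick numeric illustration (a minimal sketch; the particular matrix and points are arbitrary choices, not taken from the text), one can check that a linear map sends a collinear triple of points to a collinear triple:

```python
# Collinearity of three planar points via the cross-product (area) test:
# p, q, r are collinear iff (q - p) x (r - p) == 0.
def collinear(p, q, r):
    (px, py), (qx, qy), (rx, ry) = p, q, r
    return (qx - px) * (ry - py) - (qy - py) * (rx - px) == 0

def apply_linear(m, p):
    # m is a 2x2 matrix given as ((a, b), (c, d)); p is a point (x, y).
    (a, b), (c, d) = m
    x, y = p
    return (a * x + b * y, c * x + d * y)

# Three collinear points on the line y = 2x + 1 (an arbitrary example).
pts = [(0, 1), (1, 3), (2, 5)]
m = ((2, 1), (1, 1))  # an invertible linear map (illustrative choice)
images = [apply_linear(m, p) for p in pts]
print(collinear(*pts), collinear(*images))  # True True
```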

Concurrency (plane dual)
In various plane geometries the notion of interchanging the roles of "points" and "lines" while preserving the relationship between them is called plane duality. Given a set of collinear points, by plane duality we obtain a set of lines all of which meet at a common point. The property that this set of lines has (meeting at a common point) is called concurrency, and the lines are said to be concurrent lines. Thus, concurrency is the plane dual notion to collinearity.

Collinear vs. co-linear
Duality has a broader meaning in mathematics than it has in just geometry. Often, the prefix "co-" is used to indicate that an object is the dual, in some sense, of some other object. Over time, the hyphen in this usage tends to disappear. Thus we find that in coalgebra theory, the notion of a colinear map (or co-linear map) is dual to the notion of a linear map for vector spaces.

Collinearity graph
Given a partial geometry P, where two points determine at most one line, a collinearity graph of P is a graph whose vertices are the points of P, where two vertices are adjacent if and only if they determine a line in P.

Usage in statistics and econometrics
In statistics, collinearity refers to a linear relationship between two explanatory variables. Two variables are perfectly collinear if there is an exact linear relationship between the two. For example, $$ X_{1} $$ and $$ X_{2} $$ are perfectly collinear if there exist parameters $$\lambda_0$$ and $$\lambda_1$$ such that, for all observations i, we have


 * $$ X_{2i} = \lambda_0 + \lambda_1 X_{1i}. $$

Multicollinearity refers to a situation in which two or more explanatory variables in a multiple regression model are highly linearly related. We have perfect multicollinearity if, for example as in the equation above, the correlation between two independent variables is equal to 1 or -1. In practice, we rarely face perfect multicollinearity in a data set. More commonly, the issue of multicollinearity arises when there is a strong linear relationship among two or more independent variables.

Mathematically, a set of variables is perfectly multicollinear if there exist one or more exact linear relationships among some of the variables.
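A minimal numeric sketch of perfect collinearity (the data and the parameter values $\lambda_0 = 2$, $\lambda_1 = 3$ are invented for illustration): two perfectly collinear variables have sample correlation exactly $\pm 1$.

```python
# Perfect collinearity: X2 = lambda0 + lambda1 * X1 for every observation.
x1 = [1.0, 2.0, 3.0, 4.0]
x2 = [2.0 + 3.0 * v for v in x1]   # exactly 2 + 3*X1 (illustrative values)

def pearson(a, b):
    # Sample Pearson correlation coefficient, computed from scratch.
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((u - ma) * (w - mb) for u, w in zip(a, b))
    sa = sum((u - ma) ** 2 for u in a) ** 0.5
    sb = sum((w - mb) ** 2 for w in b) ** 0.5
    return cov / (sa * sb)

print(pearson(x1, x2))  # 1.0 up to floating point: perfectly collinear
```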

Antenna arrays


In telecommunications, a collinear (or co-linear) antenna array is an array of dipole antennas mounted in such a manner that the corresponding elements of each antenna are parallel and aligned, that is they are located along a common line or axis.

Photography
The collinearity equations are a set of two equations, used in photogrammetry and remote sensing to relate coordinates in an image (sensor) plane (in two dimensions) to object coordinates (in three dimensions). In the photography setting, the equations are derived by considering the central projection of a point of the object through the optical centre of the camera to the image in the image (sensor) plane. The three points, object point, image point and optical centre, are always collinear. Another way to say this is that the line segments joining the object points with their image points are all concurrent at the optical centre.
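In their usual form (stated here for reference; sign and axis conventions vary by author), writing $(X, Y, Z)$ for the object point, $(X_0, Y_0, Z_0)$ for the optical centre, $(x_0, y_0)$ for the principal point, $f$ for the focal length, and $R = (r_{ij})$ for the camera's rotation matrix, the collinearity equations read:
 * $$x - x_0 = -f \, \frac{r_{11}(X - X_0) + r_{12}(Y - Y_0) + r_{13}(Z - Z_0)}{r_{31}(X - X_0) + r_{32}(Y - Y_0) + r_{33}(Z - Z_0)},$$
 * $$y - y_0 = -f \, \frac{r_{21}(X - X_0) + r_{22}(Y - Y_0) + r_{23}(Z - Z_0)}{r_{31}(X - X_0) + r_{32}(Y - Y_0) + r_{33}(Z - Z_0)}.$$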

Examples
Examples of symmetric designs include finite projective planes (λ = 1) and biplane geometries (λ = 2).

The order of a symmetric design is defined to be $$N = k - \lambda$$. For example, a projective plane of order $N$ has $$k = N+1$$; the terminology arises because the projective plane over a field of $q$ elements has lines with $$q+1$$ points.

Polarities
A polarity, $\pi$, of a projective plane, $P$, is an involutory (i.e., of order two) bijection between the points and the lines of $P$ that preserves the incidence relation. Thus, a polarity relates a point $Q$ with a line $q$ and, following Gergonne, $q$ is called the polar of $Q$ and $Q$ the pole of $q$. An absolute point (line) of a polarity is one which is incident with its polar (pole).

A polarity may or may not have absolute points. A polarity with absolute points is called a hyperbolic polarity and one without absolute points is called an elliptic polarity. In the complex projective plane all polarities are hyperbolic but in the real projective plane only some are.

A classification of polarities over arbitrary fields follows from the classification of sesquilinear forms given by Birkhoff and von Neumann. Orthogonal polarities, corresponding to symmetric bilinear forms, are also called ordinary polarities and the locus of absolute points forms a non-degenerate conic (set of points whose coordinates satisfy an irreducible homogeneous quadratic equation) if the field does not have characteristic two. In characteristic two the orthogonal polarities are called pseudopolarities and in a plane the absolute points form a line.

Finite projective planes
If π is a polarity of a finite projective plane (which need not be desarguesian), $P$, of order $n$, then the number of its absolute points (or absolute lines), $a(\pi)$, is given by:
 * $$a(\pi) = n + 2r\sqrt{n} + 1,$$

where $r$ is a non-negative integer. Since $a(\pi)$ is an integer, $a(\pi) = n + 1$ if $n$ is not a square, and in this case, π is called an orthogonal polarity.

R. Baer has shown that if $n$ is odd, the absolute points of an orthogonal polarity form an oval (that is, $a(\pi) = n + 1$ points, no three collinear), while if $n$ is even, the absolute points lie on a non-absolute line.

In summary, von Staudt conics are not ovals in finite projective planes (desarguesian or not) of even order.

Relation to other types of conics
In a pappian plane (i.e., a projective plane coordinatized by a field), if the field does not have characteristic two, a von Staudt conic is equivalent to a Steiner conic. However, R. Artzy has shown that these two definitions of conics can produce non-isomorphic objects in (infinite) Moufang planes.

French quote
And this proposition is generally true for all progressions and for all prime numbers; of which I would send you the proof, if I did not fear being too long.


History
The birth of the concept of constructible numbers is inextricably linked with the history of the three impossible compass and straightedge constructions: duplicating the cube, trisecting an angle and squaring the circle. The restriction of using only compass and straightedge in geometric constructions is often credited to Plato due to a passage in Plutarch. According to Plutarch, Plato gave the duplication of the cube (Delian) problem to Eudoxus and Archytas and Menaechmus, who solved the problem using mechanical means, earning a rebuke from Plato for not solving the problem using pure geometry (Plut., Quaestiones convivales VIII.ii, 718ef). However, this attribution is challenged, due, in part, to the existence of another version of the story (attributed to Eratosthenes by Eutocius of Ascalon) that says that all three found solutions but they were too abstract to be of practical value. The fact that Oenopides (circa 450 BCE) is credited by Proclus, citing Eudemus (circa 370 - 300 BCE), with two ruler and compass constructions, when other methods were available to him, has led some authors to hypothesize that Oenopides originated the restriction.

The restriction to compass and straightedge is essential in making these constructions impossible. Angle trisection, for instance, can be done in many ways, several known to the ancient Greeks. The Quadratrix of Hippias of Elis, the conics of Menaechmus, or the marked straightedge (neusis) construction of Archimedes have all been used and we can add a more modern approach via paper folding to the list. Although not one of the classic three construction problems, the problem of constructing regular polygons with straightedge and compass is usually treated alongside them. The Greeks knew how to construct regular $n$-gons with $n = 2^{h}$, $3$, $5$ (for any integer $h \ge 2$) or the product of any two or three of these numbers, but other regular $n$-gons eluded them. Then, in 1796, an eighteen-year-old student named Carl Friedrich Gauss announced in a newspaper that he had constructed a regular 17-gon with straightedge and compass. Gauss' treatment was algebraic rather than geometric; in fact, he did not actually construct the polygon, but rather showed that the cosine of a central angle was a constructible number. The argument was generalized in his 1801 book Disquisitiones Arithmeticae giving the sufficient condition for the construction of a regular $n$-gon. Gauss claimed, but did not prove, that the condition was also necessary and several authors, notably Felix Klein, attributed this part of the proof to him as well.

In a paper from 1837, Pierre Laurent Wantzel proved algebraically that the problems of
 * doubling the cube, and
 * trisecting the angle
are impossible to solve if one uses only compass and straightedge. In the same paper he also solved the problem of determining which regular polygons are constructible:
 * a regular polygon is constructible if and only if the number of its sides is the product of a power of two and any number of distinct Fermat primes (i.e. the sufficient conditions given by Gauss are also necessary).

An attempted proof of the impossibility of squaring the circle was given by James Gregory in Vera Circuli et Hyperbolae Quadratura (The True Squaring of the Circle and of the Hyperbola) in 1667. Although his proof was faulty, it was the first paper to attempt to solve the problem using algebraic properties of π. It was not until 1882 that Ferdinand von Lindemann rigorously proved its impossibility, by extending the work of Charles Hermite and proving that π is a transcendental number.

Combinatorics
Combinatorics, it has been said, "may rightly be called the mathematics of counting." Few, however, would take this as a definition without a great deal of further amplification. Agreement on how this amplification should proceed is lacking and, according to H. J. Ryser, a definition of this subject is difficult because it crosses so many mathematical subdivisions. In so far as an area can be described by the types of problems it addresses, combinatorics is involved with
 * the enumeration (counting) of specified structures, sometimes referred to as arrangements or configurations, associated with finite sets,
 * the existence of such structures that satisfy certain given criteria,
 * the construction of these structures, perhaps in many ways, drawing upon ideas from several areas of mathematics, and
 * optimization, finding the "best" structure or solution among several possibilities, be it the "largest", "smallest" or satisfying some other optimality criterion.

Perhaps the easiest way to give an accounting of what combinatorics is, is to describe its subdivisions with their problems and techniques as is done below. However, even this approach is not completely satisfactory since it does not capture the purely historical reasons for including or not including some topics under the combinatorics umbrella.

Although primarily concerned with finite sets, some combinatorial questions and techniques can be extended to an infinite (specifically, countable) but discrete setting. This concentration on finite sets implies that techniques used in combinatorial arguments are more likely to be drawn from algebra than analysis.

Combinatorics is a branch of mathematics concerning the study of finite or countable discrete structures. Aspects of combinatorics include counting the structures of a given kind and size (enumerative combinatorics), deciding when certain criteria can be met, and constructing and analyzing objects meeting the criteria (as in combinatorial designs and matroid theory), finding "largest", "smallest", or "optimal" objects (extremal combinatorics and combinatorial optimization), and studying combinatorial structures arising in an algebraic context, or applying algebraic techniques to combinatorial problems (algebraic combinatorics).

Combinatorics is an area of mathematics primarily concerned with counting, both as a means and an end in obtaining results, and certain properties of finite structures. It is closely related to many other areas of mathematics and has many applications ranging from logic to statistical physics, from evolutionary biology to computer science, etc.

To fully understand the scope of combinatorics requires a great deal of further amplification, the details of which are not universally agreed upon. According to H. J. Ryser, a definition of the subject is difficult because it crosses so many mathematical subdivisions. In so far as an area can be described by the types of problems it addresses, combinatorics is involved with
 * the enumeration (counting) of specified structures, sometimes referred to as arrangements or configurations in a very general sense, associated with finite systems,
 * the existence of such structures that satisfy certain given criteria,
 * the construction of these structures, perhaps in many ways, drawing upon ideas from several areas of mathematics, and
 * optimization, finding the "best" structure or solution among several possibilities, be it the "largest", "smallest" or satisfying some other optimality criterion.

Leon Mirsky has said: "combinatorics is a range of linked studies which have something in common and yet diverge widely in their objectives, their methods, and the degree of coherence they have attained." One way to define combinatorics is, perhaps, to describe its subdivisions with their problems and techniques. This is the approach that is used below. This approach is not completely satisfactory since it does not capture the purely historical reasons for including or not including some topics under the combinatorics umbrella. Although primarily concerned with finite systems, some combinatorial questions and techniques can be extended to an infinite (specifically, countable) but discrete setting.

Combinatorial problems arise in many areas of pure mathematics, notably in algebra, probability theory, topology, and geometry, as well as in its many application areas. Many combinatorial questions have historically been considered in isolation, giving an ad hoc solution to a problem arising in some mathematical context. In the later twentieth century, however, powerful and general theoretical methods were developed, making combinatorics into an independent branch of mathematics in its own right. One of the oldest and most accessible parts of combinatorics is graph theory, which by itself has numerous natural connections to other areas. Combinatorics is used frequently in computer science to obtain formulas and estimates in the analysis of algorithms.

A mathematician who studies combinatorics is called a combinatorialist or a combinatorist.

Affine transformation


In geometry, an affine transformation, or an affinity (from the Latin, affinis, "connected with") is an automorphism of an affine space. More specifically, it is a function mapping an affine space onto itself that preserves the dimension of any affine subspaces (meaning that it sends points to points, lines to lines, planes to planes, and so on) and also preserves the ratio of the lengths of parallel line segments. Consequently, sets of parallel affine subspaces remain parallel after an affine transformation. An affine transformation does not necessarily preserve angles between lines or distances between points, though it does preserve ratios of distances between points lying on a straight line.

If $X$ is the point set of an affine space, then every affine transformation on $X$ can be represented as the composition of a linear transformation on $X$ and a translation of $X$. Unlike a purely linear transformation, an affine transformation need not preserve the origin of the affine space. Thus, every linear transformation is affine, but not every affine transformation is linear.

Examples of affine transformations include translation, scaling, homothety, similarity transformation, reflection, rotation, shear mapping, and compositions of them in any combination and sequence.

Viewing an affine space as the complement of a hyperplane at infinity of a projective space, the affine transformations are the projective transformations of that projective space that leave the hyperplane at infinity invariant, restricted to the complement of that hyperplane.

A generalization of an affine transformation is an affine map (or affine homomorphism or affine mapping) between two affine spaces over the same field $k$ (the two spaces need not be the same). Let $(X, V, k)$ and $(Z, W, k)$ be two affine spaces with $X$ and $Z$ the point sets and $V$ and $W$ the respective associated vector spaces over the field $k$. A map $f : X → Z$ is an affine map if there exists a linear map $m_{f} : V → W$ such that $m_{f}(x − y) = f(x) − f(y)$ for all $x, y$ in $X$.

Definition
Let $(X, V, k)$ be an affine space of dimension at least two, with $X$ the point set and $V$ the associated vector space over the field $k$. A semiaffine transformation $f$ of $X$ is a bijection of $X$ onto itself satisfying:
 * 1) If $S$ is a $d$-dimensional affine subspace of $X$, then $f(S)$ is also a $d$-dimensional affine subspace of $X$.
 * 2) If $S$ and $T$ are parallel affine subspaces of $X$, then $f(S) \parallel f(T)$.

These two conditions express what is precisely meant by the expression that "$f$ preserves parallelism". The conditions are not independent as the second follows from the first. Furthermore, if the field $k$ has at least three elements, the first condition can be simplified to: $f$ is a collineation, that is, it maps lines to lines.

If the dimension of the affine space $(X, V, k)$ is at least two, then an affine transformation is a semiaffine transformation $f$ that satisfies the condition: If $x ≠ y$ and $p ≠ q$ are points of $X$ such that the line segments $\overline{xy}$ and $\overline{pq}$ are parallel, then
 * $$\frac{\|\overline{pq}\|}{\|\overline{xy}\|} = \frac{\|\overline{f(p)f(q)}\|}{\|\overline{f(x)f(y)}\|}.$$

Affine lines
If the dimension of the affine space is one, that is, the space is an affine line, then any permutation of $X$ would automatically satisfy the conditions to be a semiaffine transform. So, an affine transformation of an affine line is defined as any permutation $f$ of the points of $X$ such that if $p ≠ q$ and $x ≠ y$ are points of $X$, then
 * $$\frac{\|\overline{pq}\|}{\|\overline{xy}\|} = \frac{\|\overline{f(p)f(q)}\|}{\|\overline{f(x)f(y)}\|}.$$

Structure
By the definition of an affine space, $V$ acts on $X$, so that, for every pair $(x, v)$ in $X × V$ there is associated a point $y$ in $X$. We can denote this action by $\vec{v}(x) = y$. Here we use the convention that $\vec{v} = v$ are two interchangeable notations for an element of $V$. By fixing a point $c$ in $X$ one can define a function $m_{c} : X → V$ by $m_{c}(x) = \vec{cx}$. For any $c$, this function is one-to-one, and so, has an inverse function $m_{c}^{-1} : V → X$ given by $m_{c}^{-1}(v) = \vec{v}(c)$. These functions can be used to turn $X$ into a vector space (with respect to the point $c$) by defining:
 * $$x + y = m_c^{-1}\left(m_c(x) + m_c(y)\right),\text{ for all } x,y \text{ in } X,$$ and
 * $$rx = m_c^{-1}\left(r m_c(x)\right), \text{ for all } r \text{ in } k \text{ and } x \text{ in } X.$$

This vector space has origin $c$ and formally needs to be distinguished from the affine space $X$, but common practice is to denote it by the same symbol and mention that it is a vector space after an origin has been specified. This identification permits points to be viewed as vectors and vice versa.

For any linear transformation $λ$ of $V$, we can define the function $L(c, λ) : X → X$ by
 * $$L(c, \lambda)(x) = m_c^{-1}\left(\lambda (m_c (x))\right) = c + \lambda (\vec{cx}).$$

Then $L(c, λ) : X → X$ is an affine transformation of $X$ which leaves the point $c$ fixed. It is a linear transformation of $X$, viewed as a vector space with origin $c$.

Let $σ$ be any affine transformation of $X$. Pick a point $c$ in $X$ and consider the translation of $X$ by the vector $$\bold{w} = \overrightarrow{c \sigma (c)}$$, denoted by $T_{\bold{w}}$. Translations are affine transformations and the composition of affine transformations is an affine transformation. For this choice of $c$, there exists a unique linear transformation $λ$ of $V$ such that
 * $$\sigma (x) = T_{\bold{w}} \left( L(c, \lambda)(x) \right).$$

That is, an arbitrary affine transformation of X is the composition of a linear transformation of $X$ (viewed as a vector space) and a translation of $X$.

This representation of affine transformations is often taken as the definition of an affine transformation (with the choice of origin being implicit). All Euclidean spaces are affine, but there are affine spaces that are non-Euclidean. In affine coordinates, which include Cartesian coordinates in Euclidean spaces, each output coordinate of an affine map is a linear function (in the sense of calculus) of all input coordinates. Another way to deal with affine transformations systematically is to select a point as the origin; then, any affine transformation is equivalent to a linear transformation (of position vectors) followed by a translation.
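In coordinates this decomposition reads $x \mapsto Ax + b$; a minimal sketch (the matrix $A$ and translation $b$ are arbitrary illustrative choices) that also checks the ratio-preservation property on a midpoint:

```python
# An affine transformation of the plane: x -> A x + b,
# i.e. a linear transformation followed by a translation.
A = ((2, 1),
     (0, 1))          # an invertible 2x2 matrix (illustrative)
b = (3, -1)           # a translation vector (illustrative)

def affine(p):
    x, y = p
    return (A[0][0] * x + A[0][1] * y + b[0],
            A[1][0] * x + A[1][1] * y + b[1])

# Take p, q, r collinear with q the midpoint of p and r.
p, q, r = (0, 0), (1, 2), (2, 4)
fp, fq, fr = affine(p), affine(q), affine(r)

# Ratios along a line are preserved: the image of the midpoint
# is the midpoint of the images.
print(fq == ((fp[0] + fr[0]) / 2, (fp[1] + fr[1]) / 2))  # True
```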

Cosets
In mathematics, specifically group theory, a subgroup $H$ of a group $G$ may be used to decompose the underlying set of $G$ into disjoint equal size pieces called cosets. There are two types of cosets, left cosets and right cosets. Cosets (of either type) have the same number of elements (cardinality) as does $H$. Furthermore, $H$ itself is a coset, which is both a left coset and a right coset. The number of left cosets of $H$ in $G$ is equal to the number of right cosets of $H$ in $G$. The common value is called the index of $H$ in $G$ and is usually denoted by $[G : H]$.

Cosets are a basic tool in the study of groups; for example they play a central role in Lagrange's theorem that states that for any finite group $G$, the number of elements of every subgroup $H$ of $G$ divides the number of elements of $G$. Cosets of a particular type of subgroup (normal subgroup) can be used as the elements of another group called a quotient group or factor group. Cosets also appear in other areas of mathematics such as vector spaces and error-correcting codes.

Definition
Let $H$ be a subgroup of the group $G$ whose operation is written multiplicatively (juxtaposition means apply the group operation). Given an element $g$ of $G$, then a left coset of $H$ in $G$ is obtained by multiplying each element of $H$ by the fixed element $g$ (where $g$ is the left factor). In symbols this is,
 * $gH = \{ gh : h \text{ an element of } H \}$ for each $g$ in $G$.

Right cosets are defined similarly, except that the element $g$ is now a right factor, that is,
 * $Hg = \{ hg : h \text{ an element of } H \}$ for $g$ in $G$.

If the group operation is written additively, as is often the case when the group is abelian, the notation used changes to $g + H$ or $H + g$, respectively.
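As a concrete sketch (the group and subgroup are small arbitrary examples, not from the text), the left cosets of an order-2 subgroup of the symmetric group $S_3$ can be enumerated directly:

```python
from itertools import permutations

# S3 as permutations of (0, 1, 2); composition (p o q)(i) = p[q[i]].
def compose(p, q):
    return tuple(p[q[i]] for i in range(len(q)))

G = list(permutations(range(3)))           # all 6 elements of S3
H = [(0, 1, 2), (1, 0, 2)]                 # a subgroup of order 2

# Left coset gH = { g*h : h in H } for each g in G; collecting them as a
# set of sets shows the distinct cosets, which partition G.
cosets = {frozenset(compose(g, h) for h in H) for g in G}
print(len(cosets))   # 3 = the index [G : H]
```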

Properties
Because $H$ is a subgroup, it contains the group's identity element, with the result that the element $g$ belongs to the coset $gH$. If $x$ belongs to $gH$ then $xH = gH$. Thus every element of $G$ belongs to exactly one left coset of the subgroup $H$.

Elements $x$ and $y$ belong to the same left coset of $H$, that is, $xH = yH$, if and only if $x^{-1}y$ belongs to $H$. More can be said here. Define two elements of $G$, say $x$ and $y$, to be equivalent with respect to the subgroup $H$ if $x^{-1}y$ belongs to $H$. This is then an equivalence relation on $G$ and the equivalence classes of this relation are the left cosets of $H$. As with any set of equivalence classes, they form a partition of the underlying set.

Similar statements apply to right cosets.

If $G$ is an abelian group, then $gH = Hg$ for every subgroup $H$ of $G$ and every element $g$ of $G$. In general, given an element $g$ and a subgroup $H$ of a group $G$, the right coset of $H$ with respect to $g$ is also the left coset of the conjugate subgroup $g^{−1}Hg$ with respect to $g$, that is, $Hg = g(g^{−1}Hg)$.

A subgroup $N$ of a group $G$ is a normal subgroup of $G$ if and only if for all elements $g$ of $G$ the corresponding left and right cosets are equal, that is, $gN = Ng$. Furthermore, the cosets of $N$ in $G$ form a group called the quotient group or factor group.

An application from coding theory
A binary linear code is an $n$-dimensional subspace $C$ of an $m$-dimensional vector space $V$ over the binary field GF(2). As $V$ is an additive abelian group, $C$ is a subgroup of this group. Codes can be used to correct errors that can occur in transmission. When a codeword (element of $C$) is transmitted some of its bits may be altered in the process and the task of the receiver is to determine the most likely codeword that the corrupted received word could have started out as. This procedure is called decoding and if only a few errors are made in transmission it can be done effectively with only a very few mistakes. One method used for decoding uses an arrangement of the elements of $V$ (a received word could be any element of $V$) into a standard array. A standard array is a coset decomposition of $V$ put into tabular form in a certain way. Namely, the top row of the array consists of the elements of $C$, written in any order, except that the zero vector should be written first. Then, an element of $V$ with a minimal number of ones that does not already appear in the top row is selected and the coset of $C$ containing this element is written as the second row (namely, the row is formed by taking the sum of this element with each element of $C$ directly above it). This element is called a coset leader and there may be some choice in selecting it. Now the process is repeated, a new vector with a minimal number of ones that does not already appear is selected as a new coset leader and the coset of $C$ containing it is the next row. The process ends when all the vectors of $V$ have been sorted into the cosets.

An example of a standard array for the 2-dimensional code $C$ = {00000, 01101, 10110, 11011} in the 5-dimensional space $V$ (with 32 vectors) is as follows:


  00000   01101   10110   11011
  10000   11101   00110   01011
  01000   00101   11110   10011
  00100   01001   10010   11111
  00010   01111   10100   11001
  00001   01100   10111   11010
  11000   10101   01110   00011
  10001   11100   00111   01010

The decoding procedure is to find the received word in the table and then add to it the coset leader of the row it is in. Since in binary arithmetic adding is the same operation as subtracting, this always results in an element of $C$. In the event that the transmission errors occurred in precisely the non-zero positions of the coset leader the result will be the right codeword. In this example, if a single error occurs, the method will always correct it, since all possible coset leaders with a single one appear in the array.

Syndrome decoding can be used to improve the efficiency of this method. It is a method of computing the correct coset (row) that a received word will be in. For an $n$-dimensional code $C$ in an $m$-dimensional binary vector space, a parity check matrix is an $(m − n) × m$ matrix $H$ having the property that $Hx = 0$ if and only if $x$ is in $C$. The vector $Hx$ is called the syndrome of $x$, and by linearity, every vector in the same coset will have the same syndrome. To decode, the search is now reduced to finding the coset leader that has the same syndrome as the received word.
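A minimal sketch of syndrome decoding for the example code $C$ = {00000, 01101, 10110, 11011}; the parity check matrix below is one valid choice for this code (an assumption of this illustration, derived from a systematic generator matrix, not given in the text):

```python
# A parity check matrix H for C = {00000, 01101, 10110, 11011}
# (rows chosen so that H x = 0 exactly when x is a codeword).
H = [(1, 1, 1, 0, 0),
     (1, 0, 0, 1, 0),
     (0, 1, 0, 0, 1)]

def syndrome(x):
    return tuple(sum(hi * xi for hi, xi in zip(row, x)) % 2 for row in H)

# Coset leaders: the zero word plus all single-error patterns.
leaders = [(0,) * 5] + [tuple(int(i == j) for i in range(5)) for j in range(5)]
table = {syndrome(e): e for e in leaders}   # syndrome -> coset leader

def decode(received):
    e = table[syndrome(received)]           # leader of the received word's coset
    return tuple((r + ei) % 2 for r, ei in zip(received, e))

# 10110 sent, one bit flipped in position 3 (0-indexed):
print(decode((1, 0, 1, 0, 0)))   # (1, 0, 1, 1, 0), the original codeword
```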

Mutually orthogonal Latin squares (MOLS)
A set of Latin squares of the same order such that every pair of squares are orthogonal (that is, form a Graeco-Latin square) is called a set of mutually orthogonal Latin squares and usually abbreviated as MOLS or MOLS(n) when the order is made explicit.

For example, a set of MOLS(4) is given by:
 * $$\begin{matrix} 1&2&3&4 \\ 2&1&4&3\\ 3&4&1&2\\4&3&2&1 \end{matrix}\qquad\qquad \begin{matrix} 1&2&3&4 \\ 4&3&2&1 \\2&1&4&3 \\3&4&1&2 \end{matrix}\qquad\qquad\begin{matrix}1&2&3&4 \\3&4&1&2\\4&3&2&1\\2&1&4&3\end{matrix}.$$

And a set of MOLS(5):
 * $$\begin{matrix}1&2&3&4&5 \\2&3&4&5&1\\3&4&5&1&2\\4&5&1&2&3\\5&1&2&3&4\end{matrix} \qquad\begin{matrix}1&2&3&4&5\\3&4&5&1&2\\5&1&2&3&4\\2&3&4&5&1\\4&5&1&2&3\end{matrix}

\qquad\begin{matrix}1&2&3&4&5\\5&1&2&3&4\\4&5&1&2&3\\3&4&5&1&2\\2&3&4&5&1\end{matrix} \qquad\begin{matrix}1&2&3&4&5\\4&5&1&2&3\\2&3&4&5&1\\5&1&2&3&4\\3&4&5&1&2\end{matrix}.$$ While it is possible to represent MOLS in a "compound" matrix form similar to the Graeco-Latin squares, for instance,
  1,1,1,1   2,2,2,2   3,3,3,3   4,4,4,4   5,5,5,5
  2,3,5,4   3,4,1,5   4,5,2,1   5,1,3,2   1,2,4,3
  3,5,4,2   4,1,5,3   5,2,1,4   1,3,2,5   2,4,3,1
  4,2,3,5   5,3,4,1   1,4,5,2   2,5,1,3   3,1,2,4
  5,4,2,3   1,5,3,4   2,1,4,5   3,2,5,1   4,3,1,2

for the MOLS(5) example above, it is more typical to compactly represent the MOLS as an orthogonal array (see below).

Orthogonal array
An orthogonal array, OA($k,n$), of strength two and index one is an $n^{2} × k$ array $A$ ($k$ ≥ 2 and $n$ ≥ 1, integers) with entries from a set of size $n$ such that within any two columns of $A$, every ordered pair of symbols appears in exactly one row of $A$.

An OA($s$ + 2, $n$) is equivalent to $s$ MOLS($n$). For example, the MOLS(4) example given above and repeated here,
 * $$\begin{matrix} 1&2&3&4 \\ 2&1&4&3\\ 3&4&1&2\\4&3&2&1\\& L_1&& \end{matrix}\qquad\qquad \begin{matrix} 1&2&3&4 \\ 4&3&2&1 \\2&1&4&3 \\3&4&1&2 \\& L_2&&\end{matrix}\qquad\qquad\begin{matrix}1&2&3&4 \\3&4&1&2\\4&3&2&1\\2&1&4&3\\& L_3&&\end{matrix}$$

can be used to form an OA(5,4):
  r  c  L1  L2  L3
  1  1  1   1   1
  1  2  2   2   2
  1  3  3   3   3
  1  4  4   4   4
  2  1  2   4   3
  2  2  1   3   4
  2  3  4   2   1
  2  4  3   1   2
  3  1  3   2   4
  3  2  4   1   3
  3  3  1   4   2
  3  4  2   3   1
  4  1  4   3   2
  4  2  3   4   1
  4  3  2   1   4
  4  4  1   2   3

where the entries in the columns labeled r and c denote the row and column of a position in a square and the rest of the row for fixed r and c values is filled with the entry in that position in each of the Latin squares. This process is reversible; given an OA($s$,$n$) with $s$ ≥ 3, choose any two columns to play the r and c roles and then fill out the Latin squares with the entries in the remaining columns.
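The equivalence between MOLS and orthogonal arrays can be checked mechanically; a minimal sketch using the MOLS(4) of this section (the row-building convention follows the description above):

```python
# The three MOLS(4) from the text.
L1 = [[1,2,3,4],[2,1,4,3],[3,4,1,2],[4,3,2,1]]
L2 = [[1,2,3,4],[4,3,2,1],[2,1,4,3],[3,4,1,2]]
L3 = [[1,2,3,4],[3,4,1,2],[4,3,2,1],[2,1,4,3]]

# Build the OA(5,4): one row (r, c, L1[r][c], L2[r][c], L3[r][c]) per cell.
oa = [(r + 1, c + 1, L1[r][c], L2[r][c], L3[r][c])
      for r in range(4) for c in range(4)]

# Strength two, index one: within any two columns, every ordered pair
# of symbols appears in exactly one row (n^2 distinct pairs in n^2 rows).
def is_oa(rows, k, n):
    for i in range(k):
        for j in range(k):
            if i != j:
                pairs = {(row[i], row[j]) for row in rows}
                if len(pairs) != n * n:
                    return False
    return True

print(is_oa(oa, 5, 4))   # True
```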

Pencil (mathematics)


In geometry, a pencil is a family of geometric objects with a common property, for example the set of lines that pass through a given point in a plane, or the set of circles that pass through two given points in a plane.

Although the definition of a pencil is rather vague, the common characteristic is that the pencil is completely determined by any two of its members. Analogously, a set of geometric objects that are determined by any three of its members is called a bundle. Thus, the set of all lines through a point in three-space is a bundle of lines, any two of which determine a pencil of lines. To emphasize the two-dimensional nature of such a pencil, it is sometimes referred to as a flat pencil.

Any geometric object can be used in a pencil. The common ones are lines, planes, circles, conics, spheres and general curves. Even points can be used. A pencil of points is the set of all points on a given line. A more common term for this set is a range of points.

Pencil of lines
In a plane, let $u$ and $v$ be two distinct intersecting lines. For concreteness, suppose that $u$ has the equation $aX + bY + c = 0$ and $v$ has the equation $a'X + b'Y + c' = 0$. Then
 * $$\lambda u + \mu v = 0$$

represents, for suitable scalars $λ$ and $μ$, any line passing through the intersection of $u$ = 0 and $v$ = 0. This set of lines passing through a common point is called a pencil of lines. The common point of a pencil of lines is called the vertex of the pencil.
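For a concrete instance (a hypothetical pair of lines chosen for illustration): take $u$: $x − y = 0$ and $v$: $x + y − 2 = 0$, which meet at $(1, 1)$; a quick numeric check confirms that every combination $λu + μv$ vanishes at the vertex.

```python
# Hypothetical concrete pencil: the lines u: x - y = 0 and
# v: x + y - 2 = 0 meet at the vertex (1, 1).
def u(x, y): return x - y
def v(x, y): return x + y - 2

# Every combination lambda*u + mu*v = 0 is a line through the vertex.
for lam, mu in [(1, 0), (0, 1), (2, -3), (1, 1)]:
    assert lam * u(1, 1) + mu * v(1, 1) == 0
```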

In an affine plane with the reflexive variant of parallelism, a set of parallel lines forms an equivalence class called a pencil of parallel lines. This terminology is consistent with the above definition since in the unique projective extension of the affine plane to a projective plane a single point (point at infinity) is added to each line in the pencil of parallel lines, thus making it a pencil in the above sense in the projective plane.

Pencil of planes
A pencil of planes is the set of planes through a given straight line in three-space, called the axis of the pencil. The pencil is sometimes referred to as a fan or a sheaf. For example, the meridians of the globe are defined by the pencil of planes on the axis of Earth's rotation.

Two intersecting planes meet in a line in three-space, and so, determine the axis and hence all of the planes in the pencil.

In higher dimensional spaces, a pencil of hyperplanes consists of all the hyperplanes that contain a subspace of codimension 2. Such a pencil is determined by any two of its members.

Pencil of circles
Any two circles in the plane have a common radical axis, which is the line consisting of all the points that have the same power with respect to the two circles. A pencil of circles (or coaxial system) is the set of all circles in the plane with the same radical axis. To be inclusive, concentric circles are said to have the line at infinity as a radical axis.
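The radical axis can be checked numerically. A minimal sketch, using a hypothetical pair of unit circles; the function name `power` is illustrative:

```python
# Power of a point P with respect to a circle with center c, radius r:
# pow(P) = |P - c|^2 - r^2. The radical axis is the locus where the
# powers with respect to two circles agree.
def power(P, c, r):
    return (P[0] - c[0])**2 + (P[1] - c[1])**2 - r * r

# Hypothetical pair: unit circles centered at (0,0) and (3,0); by
# symmetry their radical axis is the vertical line x = 1.5.
for y in (-2.0, 0.0, 5.0):
    assert power((1.5, y), (0.0, 0.0), 1.0) == power((1.5, y), (3.0, 0.0), 1.0)
```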

There are five types of pencils of circles, the two families of Apollonian circles in the illustration above represent two of them. Each type is determined by two circles called the generators of the pencil. When described algebraically, it is possible that the equations may admit imaginary solutions. The types are:
 * An elliptic pencil (red family of circles in the figure) is defined by two generators that intersect each other in exactly two points. Every circle of an elliptic pencil passes through the same two points. An elliptic pencil does not include any imaginary circles.
 * A hyperbolic pencil (blue family of circles in the figure) is defined by two generators that do not intersect each other at any point. It includes real circles, imaginary circles, and two degenerate point circles called the Poncelet points of the pencil. Each point in the plane belongs to exactly one circle of the pencil.
 * A parabolic pencil (as a limiting case) is defined where two generating circles are tangent to each other at a single point. It consists of a family of real circles, all tangent to each other at a single common point. The degenerate circle with radius zero at that point also belongs to the pencil.
 * A family of concentric circles centered at a common center (may be considered a special case of a hyperbolic pencil where the other point is the point at infinity).
 * The family of straight lines through a common point; these should be interpreted as circles that all pass through the point at infinity (may be considered a special case of an elliptic pencil).

Properties
A circle that is orthogonal to two fixed circles is orthogonal to every circle in the pencil they determine.

The circles orthogonal to two fixed circles form a pencil of circles.

Two circles determine two pencils, the unique pencil that contains them and the pencil of circles orthogonal to them. The radical axis of one pencil consists of the centers of the circles of the other pencil. If one pencil is of elliptic type, the other is of hyperbolic type and vice versa.
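These facts can be illustrated numerically with a hypothetical pair of pencils (names and coordinates chosen for this sketch): the elliptic pencil through $(\pm 1, 0)$ consists of circles centered at $(0, t)$ with radius $\sqrt{1+t^2}$, and its orthogonal hyperbolic pencil has centers $(s, 0)$, $|s| > 1$, and radius $\sqrt{s^2-1}$.

```python
from math import sqrt

# Two circles are orthogonal exactly when the squared distance between
# their centers equals the sum of their squared radii.
def orthogonal(c1, r1, c2, r2, tol=1e-9):
    d2 = (c1[0] - c2[0])**2 + (c1[1] - c2[1])**2
    return abs(d2 - (r1 * r1 + r2 * r2)) < tol

# Elliptic pencil through (+-1, 0): centers (0, t), radius sqrt(1+t^2).
# Hyperbolic pencil: centers (s, 0) with |s| > 1, radius sqrt(s^2-1).
# Every member of one pencil is orthogonal to every member of the other,
# and the centers (0, t) fill the radical axis of the hyperbolic pencil.
for t in (-2.0, 0.0, 1.5):
    for s in (1.25, 2.0, -3.0):
        assert orthogonal((0.0, t), sqrt(1 + t * t), (s, 0.0), sqrt(s * s - 1))
```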

Projective space of circles
There is a natural correspondence between circles in the plane and points in three-dimensional projective space; a line in this space corresponds to a one-dimensional continuous family of circles, hence a pencil of points in this space is a pencil of circles in the plane.

Specifically, the equation of a circle of radius $r$ centered at the point $(p, q)$,
 * $$ (x-p)^2 + (y-q)^2 = r^2,$$

may be rewritten as
 * $$ \alpha(x^2 + y^2) - 2\beta x - 2\gamma y + \delta = 0,$$

where $α = 1$, $β = p$, $γ = q$, and $δ = p^{2} + q^{2} − r^{2}$. In this form, multiplying the quadruple $(α, β, γ, δ)$ by a scalar produces a different quadruple that represents the same circle; thus, these quadruples may be considered to be homogeneous coordinates for the space of circles. Straight lines may also be represented with an equation of this type in which $α = 0$ and should be thought of as being a degenerate form of a circle. When $α ≠ 0$, we may solve for $p = β/α$, $q = γ/α$, and $r = ±\sqrt{p^{2} + q^{2} − δ/α}$; the latter formula may give $r = 0$ (in which case the circle degenerates to a point) or $r$ equal to an imaginary number (in which case the quadruple $(α, β, γ, δ)$ is said to represent an imaginary circle).
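A small computational sketch of the correspondence, with illustrative function names; scaling a quadruple by a nonzero constant leaves the recovered circle unchanged, as homogeneity requires:

```python
from math import isclose

# Quadruple (alpha, beta, gamma, delta) for the circle of radius r
# centered at (p, q), following alpha=1, beta=p, gamma=q,
# delta = p^2 + q^2 - r^2.
def circle_to_quad(p, q, r):
    return (1.0, p, q, p * p + q * q - r * r)

def quad_to_circle(alpha, beta, gamma, delta):
    p, q = beta / alpha, gamma / alpha
    r2 = p * p + q * q - delta / alpha  # r^2; negative => imaginary circle
    return p, q, r2

# Homogeneity: scaling the quadruple recovers the same circle.
quad = circle_to_quad(3.0, -1.0, 2.0)             # (1, 3, -1, 6)
p, q, r2 = quad_to_circle(*[2 * t for t in quad])
assert isclose(p, 3.0) and isclose(q, -1.0) and isclose(r2, 4.0)
```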

The set of affine combinations of two circles $(α_{1},β_{1},γ_{1},δ_{1})$, $(α_{2},β_{2},γ_{2},δ_{2})$, that is, the set of circles represented by the quadruple
 * $$ z(\alpha_1,\beta_1,\gamma_1,\delta_1)+(1-z)(\alpha_2,\beta_2,\gamma_2,\delta_2)$$

for some value of the parameter $z$, forms a pencil; the two circles being the generators of the pencil.
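As a concrete check (hypothetical generators chosen for illustration): two circles through the base points $(\pm 1, 0)$ can be combined affinely, and every member of the resulting pencil still passes through both base points.

```python
# Hypothetical generators through the base points (+-1, 0), written as
# quadruples (alpha, beta, gamma, delta) for the equation
# alpha*(x^2 + y^2) - 2*beta*x - 2*gamma*y + delta = 0:
g1 = (1.0, 0.0, 0.0, -1.0)   # x^2 + y^2 = 1
g2 = (1.0, 0.0, 1.0, -1.0)   # x^2 + y^2 - 2y - 1 = 0

def eval_quad(quad, x, y):
    a, b, c, d = quad
    return a * (x * x + y * y) - 2 * b * x - 2 * c * y + d

# Every affine combination z*g1 + (1-z)*g2 represents a circle of the
# pencil and still vanishes at both base points.
for z in (-1.0, 0.0, 0.25, 1.0, 3.0):
    q = tuple(z * a + (1 - z) * b for a, b in zip(g1, g2))
    assert eval_quad(q, 1.0, 0.0) == 0.0
    assert eval_quad(q, -1.0, 0.0) == 0.0
```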

Cardioid as envelope of a pencil of circles


Another type of pencil of circles can be obtained as follows. Consider a given circle (called the generator circle) and a distinguished point P on the generator circle. The set of all circles that pass through P and have their centers on the generator circle form a pencil of circles. The envelope of this pencil is a cardioid.

Pencil of conics
A (non-degenerate) conic is completely determined by five points in general position (no three collinear) in a plane and the system of conics which pass through a fixed set of four points (again in a plane and no three collinear) is called a pencil of conics. The four common points are called the base points of the pencil. Through any point other than a base point, there passes a single conic of the pencil. This concept generalizes a pencil of circles.

In a projective plane defined over an algebraically closed field any two conics meet in four points (counted with multiplicity) and so, determine the pencil of conics based on these four points. Furthermore, the four base points determine three line pairs (degenerate conics through the base points, each line of the pair containing exactly two base points) and so each pencil of conics will contain at most three degenerate conics.

A pencil of conics can be represented algebraically in the following way. Let $C_{1}$ and $C_{2}$ be two distinct conics in a projective plane defined over an algebraically closed field $K$. For every pair $λ, μ$ of elements of $K$, not both zero, the expression:


 * $$\lambda C_1 + \mu C_2$$

represents a conic in the pencil determined by $C_{1}$ and $C_{2}$. This symbolic representation can be made concrete with a slight abuse of notation (using the same notation to denote the object as well as the equation defining the object). Thinking of $C_{1}$, say, as a ternary quadratic form, then $C_{1} = 0$ is the equation of the "conic $C_{1}$". Another concrete realization would be obtained by thinking of $C_{1}$ as the 3×3 symmetric matrix which represents it. If $C_{1}$ and $C_{2}$ have such concrete realizations then every member of the above pencil will as well. Since the setting uses homogeneous coordinates in a projective plane, two concrete representations (either equations or matrices) give the same conic if they differ by a non-zero multiplicative constant.
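The matrix realization invites a small computational sketch (hypothetical generators chosen for illustration): taking the degenerate conics $x^2 − z^2 = 0$ and $y^2 − z^2 = 0$ through the four base points $(\pm 1, \pm 1)$, the member $λC_1 + μC_2$ has determinant $−λμ(λ+μ)$, so the pencil has exactly three degenerate members, matching the three line pairs through the base points.

```python
# Conics as 3x3 symmetric matrices (here diagonal), in homogeneous
# coordinates (x, y, z). Pencil through the base points (+-1, +-1, 1),
# generated by C1: x^2 - z^2 = 0 and C2: y^2 - z^2 = 0.
C1 = (1.0, 0.0, -1.0)   # diagonal of the matrix of x^2 - z^2
C2 = (0.0, 1.0, -1.0)   # diagonal of the matrix of y^2 - z^2

def member(lam, mu):
    """Diagonal of the matrix of lam*C1 + mu*C2."""
    return tuple(lam * a + mu * b for a, b in zip(C1, C2))

def det(diag):
    """Determinant of a diagonal 3x3 matrix."""
    return diag[0] * diag[1] * diag[2]

# det(member) = -lam*mu*(lam+mu): singular (i.e. a degenerate conic)
# exactly for lam = 0, mu = 0, or lam = -mu -- three line pairs.
for lam, mu in [(0.0, 1.0), (1.0, 0.0), (1.0, -1.0)]:
    assert det(member(lam, mu)) == 0.0
assert det(member(1.0, 1.0)) == -2.0  # a smooth member of the pencil
```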

Pencil of plane curves
More generally, a pencil is the special case of a linear system of divisors in which the parameter space is a projective line. Typical pencils of curves in the projective plane, for example, are written as


 * $$ \lambda C + \mu C' = 0 \, $$

where $C$, $C'$ are plane curves.

History
Desargues is credited with inventing the term "pencil of lines" (ordonnance de lignes).

An early author of modern projective geometry, G. B. Halsted, introduced many terms, most of which are now considered to be archaic. For example, "Straights with the same cross are copunctal." Also "The aggregate of all coplanar, copunctal straights is called a flat-pencil" and "A piece of a flat-pencil bounded by two of the straights as sides, is called an angle."

"The aggregate of all planes on a straight is called an axial-pencil."