Talk:System of linear equations/Archive 1

Markup
Greetings and apologies for such a mundane comment.

I notice that Daniel Brockman added some formatting to the page and I was curious about some of it.


 * 1) Why use <math>x_1</math> instead of x<sub>1</sub>?
 * 2) Why use &minus; instead of -? The Konqueror browser has trouble with &minus;.

thanks,

Eric Blossom 19:26, 6 May 2004 (UTC)

Referring to 1: we don't use HTML, we use the wiki markup, which is easier to type and renders as the same thing. 2 - I don't know. Dysprosia 22:24, 6 May 2004 (UTC)

The answer to 2 is that it looks a lot better and is clearer to read. I know there are some of these HTML characters that are 'deprecated', in the sense that they are not universally supported.

Charles Matthews 11:09, 7 May 2004 (UTC)

Relaxation
Aren't we missing some information on relaxation methods for solving systems? (SOR, SSOR, etc.) David.Monniaux 20:37, 15 December 2004 (UTC)

Wiki markup on all equations
I noticed some of the equations looked inconsistent: some were wiki markup, some looked like HTML, so I made them all wiki markup.

By the way, to do this use \, okay? Look at the source code in the article [if you can] for reference.

I don't see why anybody would complain if I made it all wikified or whatever, but comment if you wish.

And also, should we change \frac{1}{2}x_2 in line 4 to x_2/2? It wouldn't stretch the equation, so I changed that too. It could also be 0.5x_2; comment on that if you want to say something.--EulerGamma 18:33, 6 August 2006 (UTC)

"Linear" Equations?
The title of the page is "System of Linear equations;" however, the three equations shown in the pictures are quadratic equations. What's going on here? —Preceding unsigned comment added by The Edit0r (talk • contribs) 03:06, 17 November 2006 (UTC)


 * Are you confusing this article with some other article? I don't see any pictures. -- Jitse Niesen (talk) 03:59, 17 November 2006 (UTC)

Merging Suggestion
I am proposing a merging of this article with part of Elementary algebra's info on Systems of Linear Equations and Simultaneous equations. Comments are appreciated and the explanation and discussion are being held here: Wikipedia talk:WikiProject Mathematics. (Quadrivium 23:17, 17 November 2006 (UTC))

Homogeneous linear equation
This article appears to be the only place on WP which discusses homogeneous linear equations. I believe that this subject deserves a little more attention than the few lines presented here, in particular since the solutions have a slightly different character than for the inhomogeneous case. I plan to either expand the topic here or move it to a separate article. Any opinion? --KYN 18:46, 5 August 2007 (UTC)


 * I'd keep it here until the whole article gets too unwieldy through sheer size, or the section on the homogeneous case starts to dominate, at which time we should go to summary style or create a spinout. Note that there is also some material in Null space, which might be put here. --Lambiam 19:50, 5 August 2007 (UTC)


 * I take it you have found Homogeneous coordinates and Linear map (currently a redirect from Homogeneous linear transformation); there may be some relevant info there. --Salix alba (talk) 21:12, 5 August 2007 (UTC)


 * I have now moved the homogeneous case to a separate section and linked it to Null space. I will later expand that section with more ways to find null spaces, e.g. SVD.  Sorry Salix alba, but I couldn't find anything useful in the references you suggested, please correct me if I'm wrong.  --KYN 21:59, 5 August 2007 (UTC)


 * Correcting myself, since there is a connection to Homogeneous coordinates via the rewrite of the inhomogeneous case to a homogeneous equation. This is now included.  --KYN 09:32, 6 August 2007 (UTC)

Solving a linear system
First, thanks to user for the recent work.

That said, there is more that must be done, especially in the section "Solving a linear system". Save for a handwave in the article intro, we see no hint that this is an enormous topic in numerical linear algebra, with many algorithms and implementations in regular use to solve huge systems, and subject to substantial ongoing research (especially concerning exploitation of parallel architectures). We see no mention of LU decomposition (much less, say, SOR), and we don't even see a mention of (partial) pivoting in Gaussian elimination — which is essential. Beyond the algorithms for real and complex systems handled in floating point, we should also mention the algorithms used by modern computer algebra systems, especially when division is not available.

Volunteers? :-) --KSmrqT 09:09, 30 August 2007 (UTC)

I agree numerical techniques for solving linear systems are poorly treated in the current version of the article. I think the ideal thing would be a complete rewriting and expansion of the "other methods" subsection, which survived wholesale from the previous version. (I know essentially nothing about numerical techniques, so I couldn't really rewrite this part.) Two points to keep in mind:
 * 1) Systems of linear equations are very important in theoretical mathematics, for reasons that I've outlined in the article.  The article as a whole should not treat linear systems exclusively as a certain type of "problem", though that point of view is certainly appropriate for the section on finding solutions.
 * 2) I tried very hard to make the article accessible to high school students or, at the very least, college students taking a first course in linear algebra.  Any discussion of numerical techniques should strive to be introductory and non-technical, with links to additional articles having more information.

Thanks to anyone who volunteers to help. Jim 17:25, 30 August 2007 (UTC)


 * The current section on "other methods" is sticking out like a sore thumb now that the rest is rewritten. However, I agree with Jim that we should try to keep the article accessible. Hence, my preference would be to write just two or three paragraphs on this and put the rest in another article, say solution of systems of linear equations (please find a better title). Given that Gaussian elimination is not really explained, I think it's not that much of a problem that pivoting is not mentioned.
 * I saw that Jim is drafting an article at User:Jim.belk/Draft:Row reduction. We already have this article and articles on Gaussian elimination (which is not in good shape, if I remember correctly) and LU decomposition. These all cover very similar topics. I'm wondering what the best way is to organize these articles. Jim, what are your plans? -- Jitse Niesen (talk) 02:22, 31 August 2007 (UTC)


 * I'm teaching linear algebra this semester, and I'm generally planning to work on linear algebra articles at roughly the same rate that my class covers the topics. The drafts I keep on my user page represent potential revisions; I tend to write a lot of drafts and then only post the ones that turn out well.


 * User:Jim.belk/Draft:Row reduction is an attempt to make a general article on row reduction, intended to cover the following topics:
 * A description of the elementary row operations
 * A discussion of the two echelon forms, and why they are helpful
 * A brief description of both Gaussian elimination and Gauss-Jordan elimination, without too many details. A discussion on things like running time and numerical stability would fit in well here, although I don't know enough about numerical analysis to write it myself.
 * A discussion of the many applications of row reduction in linear algebra, e.g. determining whether vectors are linearly independent, finding the inverse of a matrix, computing determinants, etc.
 * A discussion of the role of row reduction in higher mathematics, e.g. to simplify presentations of abelian groups.
 * My current thoughts are that a general article like this is needed for an introduction to the topic, with links to Gaussian elimination, Gauss-Jordan elimination, and possibly other articles to describe specific algorithms in more depth. Jim 22:42, 31 August 2007 (UTC)


More points to keep in mind


 * 1) There is no discussion on the difference between small and large linear systems in terms of solution methods.  The three methods "Elimination of variables", "Row reduction", and "Cramer's rule" are (I believe) of lesser relevance when the size of the system is large and instead other methods are better suited for these cases.  The numerical resolution of the calculations is probably involved in some way when it comes to determining when more complex methods are better.
 * 2) Similarly, there is a difference between symbolic solutions, where the solution has infinite accuracy, and numerical solutions where a limited accuracy is assumed.  For example, various types of factorizations of A, which are relevant for large systems, cannot be made if the solution should be symbolic.
 * 3) Maybe even say something about the fact that in the case of large systems you actually need a computer to do the calculations for you, and that standard programs (Matlab, Mathematica, Maple, Ma... ) can solve VERY large systems (1000 equations/variables appears to be possible in many cases) and you don't need to know exactly HOW they compute the solution, but you need to understand the character of the solution; does it really exist, is it unique, how accurate is it computed?
 * 4) There also needs to be a discussion about the fact that some systems (not just linear) have no solution but it is still relevant to talk about an "approximate solution".  Take three linear equations in two variables, corresponding to three lines in the plane which almost intersect at a single point.  There is no "exact" solution, but we can try to determine an approximate solution in one way or another.  This situation appears in many practical situations when the elements of A are not known exactly, for example due to noise.
 * 5) I wrote something about rewriting non-homogeneous systems into homogeneous ones, essentially using homogeneous coordinates, but this is gone now.  This "trick" has applications, for example in computer vision.  I believe that this connection is worth mentioning, at least with one sentence.

--KYN 20:32, 30 August 2007 (UTC)
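Point 4 above (an inconsistent system with a useful "approximate solution") can be sketched in a few lines. This is an illustrative Python/NumPy example, not taken from the article: three lines in the plane that almost, but not quite, meet at one point, solved in the least-squares sense.

```python
import numpy as np

# Three lines a*x + b*y = c. The first two intersect exactly at (1, 1);
# the third right-hand side is perturbed so that no common point exists.
A = np.array([[1.0,  1.0],
              [1.0, -1.0],
              [2.0,  1.0]])
b = np.array([2.0, 0.0, 3.01])

# lstsq minimises ||A x - b||_2; the residual is nonzero because the
# system is inconsistent.
x, residuals, rank, sv = np.linalg.lstsq(A, b, rcond=None)
print(x)            # close to (1, 1)
print(residuals)    # small but nonzero
```

This is the least-squares solution Jitse mentions below; it exists and is unique here because A has full column rank.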


 * I numbered your points for easier reference.
 * 1. Elimination of variables and row reduction are basically the same. One specific algorithm, LU decomposition with partial pivoting (essentially Gaussian elimination), is the standard method if you don't know anything about the matrix A. Other methods all require A to have some sort of structure.
 * 2. I think decompositions still work fine in a symbolic context if the matrices are over real or complex numbers. If you can't divide, you have a problem. I don't know anything about what to do then, but apparently KSmrq knows a bit about this. —This is part of a comment by Jitse Niesen which got interrupted by the following:
 * I was thinking about the two cases: 1) when the entries of A are purely symbolic, e.g., a_11, a_12, etc., and we want to compute a closed form expression for the solution in terms of these symbols; 2) when the elements of A are represented as floating point numbers in a fixed resolution, e.g. 64 bits. I'm not sure, but I guess that the solution strategies could be different for these two cases? --KYN 08:43, 31 August 2007 (UTC)
 * 3. I don't think we should tell the readers what they do and do not know. We can mention that there are many programs that can solve linear systems. By the way, "large" in this context ranges from hundreds to billions; 1000 is not considered very large. —This is part of a comment by Jitse Niesen which got interrupted by the following:
 * What I mean is that in practice, one way of solving your system of equations is to feed it to suitable computer software and ask it kindly to solve it please. This is probably how many, many engineers deal with solving their equations.  They (we) don't think so much about which particular method has been chosen by the software for computing the solution.  They do, however, have to be (should be) aware that the resulting solution has to be critically examined, e.g. in terms of accuracy, uniqueness, etc.  Solving systems of equations is not just about choosing a method, which in practice is done for you by the software; it is also about understanding the solution. --KYN 08:44, 31 August 2007 (UTC)
 * The article treats the existence/uniqueness part pretty well, in my opinion. It doesn't mention conditioning/accuracy though. That should probably be added. -- Jitse Niesen (talk) 10:20, 31 August 2007 (UTC)
 * 4. I agree with that. Something about the least-squares solution should be mentioned. -- Jitse Niesen (talk) 02:22, 31 August 2007 (UTC)
 * Maybe a subsection on "overdetermined systems" immediately after Cramer's rule? Jim 23:00, 31 August 2007 (UTC)
 * 5. Hmm, not so sure about that one. -- Jitse Niesen (talk) 02:22, 31 August 2007 (UTC)
 * Not sure about what? I'm not suggesting that the rewrite is an approach that suits every purpose, it is rather an option that in some cases leads to robust algorithms for computing things like optical flow in computer vision.  See it as a theoretical connection between the inhomogeneous and homogeneous cases which sometimes is useful to know about.  Also, in some cases this trick "moves" singularities from the computation of the solution to the solution itself.  Think about setting up a system of equations for determining the point (x,y) which is defined as the intersection of two lines.  The parameters of the lines go into A which means that if the lines are parallel, x cannot be computed.  Consequently, we have to examine either the lines or A to discover this situation.  Embedding the problem in a homogeneous representation of x leads to an equation system which can always be solved, even for parallel lines, but the singularity is encoded in the homogeneous representation of x.  Sometimes this is useful, sometimes not.  Why not say something about it? --KYN 08:43, 31 August 2007 (UTC)
 * I meant that I don't know about whether it should be included. We cannot include everything. Sure, I've seen it used, but not that often, and definitely not in a computational setting (actually, I find it rather strange that it's more robust). But I haven't read everything, and perhaps it is important in some settings. -- Jitse Niesen (talk) 10:20, 31 August 2007 (UTC)
 * There's actually a lot more that could be said about homogeneous systems, e.g. the connection with homogeneous differential equations. In the long run, this subject deserves its own article. Jim 23:00, 31 August 2007 (UTC)
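The homogeneous-coordinates "trick" KYN describes for the two-line intersection can be sketched concretely. This is an illustrative Python/NumPy example using the standard projective-geometry formulation (an assumption on my part, not taken from the article): a line a*x + b*y + c = 0 is the triple (a, b, c), and the intersection of two lines is their cross product.

```python
import numpy as np

def intersect(l1, l2):
    """Homogeneous intersection point of two lines; a zero last
    coordinate means the lines are parallel (point 'at infinity')."""
    return np.cross(l1, l2)

# x + y - 2 = 0 and x - y = 0 meet at (1, 1).
p = intersect([1.0, 1.0, -2.0], [1.0, -1.0, 0.0])
print(p / p[2])   # (1, 1, 1)

# Two parallel lines: the computation still succeeds, but the
# singularity shows up as a zero third coordinate instead of a
# division by zero.
q = intersect([1.0, 1.0, -2.0], [1.0, 1.0, -3.0])
print(q)          # third coordinate is 0
```

This illustrates KYN's point: the singular case (parallel lines) is encoded in the homogeneous representation of the answer rather than crashing the solver.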

--KYN 08:43, 31 August 2007 (UTC)
 * 1) In summary, it seems reasonable to move the details and varieties on solving the equations to a separate article, but keep a short discussion on the topic here, e.g., demonstrating how a 2 x 2 system can be solved and pointing out the possibilities of generalizing to N x M, but also pointing out that such a generalization has to be made with some care.
 * 2) Is the LU strategy proposed above also the best choice for the homogeneous case?  Are the inhomogeneous and homogeneous equations not solved by different methods, in particular for large systems?
 * 3) Maybe also mention something about the solution x = A^(-1)b.  If you have learned about inverting matrices, this is an intuitive approach which should work, but there are apparently issues related to computation time and accuracy which motivate why you shouldn't use it, in particular for large systems.
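Point 3 can be illustrated with a short sketch (illustrative Python/NumPy, not from the article; in double precision on a small well-conditioned system both routes agree, and the difference is one of cost and accuracy on large or ill-conditioned systems):

```python
import numpy as np

# The well-conditioned 3x3 system from the pivoting discussion below,
# whose exact solution is (0, -1, 1).
A = np.array([[10.0, -7.0, 0.0],
              [-3.0,  2.0, 6.0],
              [ 5.0, -1.0, 5.0]])
b = np.array([7.0, 4.0, 6.0])

x_solve = np.linalg.solve(A, b)    # solves the system directly (LU-based)
x_inv   = np.linalg.inv(A) @ b     # forms A^(-1) first: more work, and
                                   # generally less accurate on hard problems
print(x_solve)
print(x_inv)
```

As Jitse notes in reply, this is somewhat circular: the usual way to compute A^(-1) is itself to solve a linear system with multiple right-hand sides.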


 * 1. I think the article explains pretty well how to solve a 2 x 2 system, so I'm not sure what you mean.
 * 2. Usually, one assumes in the inhomogeneous case that the matrix is invertible, while in the homogeneous case, the matrix is singular. I'm not sure what the proper approach is in the latter case; stability is likely to be an important issue (is solving homogeneous systems well-conditioned?). Matlab uses the singular value decomposition to calculate the null space of a matrix, so I suppose that's a reasonable approach. But to answer your question: I doubt LU is the best choice for the homogeneous case.
 * 3. Yes, the connection with the matrix inverse could well be mentioned. However, I don't think we should mention it in the context of solving a linear system, because the usual method for computing a matrix inverse is to solve a system of linear equations with multiple right-hand sides. -- Jitse Niesen (talk) 10:20, 31 August 2007 (UTC)
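The SVD approach to the homogeneous case mentioned in point 2 can be sketched as follows (illustrative Python/NumPy; the relative tolerance is an arbitrary choice of mine, not something prescribed by the discussion):

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [2.0, 4.0]])    # rank 1, so the null space is one-dimensional

# Right singular vectors belonging to (near-)zero singular values
# span the null space of A, i.e. the solutions of A x = 0.
U, s, Vt = np.linalg.svd(A)
tol = 1e-12 * s[0]            # arbitrary relative tolerance
ns = Vt[s <= tol]             # rows of V^T for the tiny singular values
print(A @ ns.T)               # essentially zero
```

This is essentially what Matlab's null-space computation does, per Jitse's comment above; unlike LU it degrades gracefully when the matrix is singular.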

I added a few comments to the above discussion. I also added a short section at the beginning of the article on solving 2 x 2 systems. (At least some part of the article should be accessible to beginning algebra students and other readers with a layman's knowledge of mathematics.) Jim 23:00, 31 August 2007 (UTC)

Hazards of floating point
Numerical analysts know "pivoting" is essential in Gaussian elimination, but others may never have seen why. So here is some recommended reading, freely available online:
 * 1) Forsythe, George E. (1970) Stanford tech. report CS-TR-70-147: Pitfalls in computation, or why a math book isn't enough, pp. 17–20
 * 2) Moler, Cleve (2004) Numerical Computing with MATLAB, Ch. 2: Linear Equations, pp. 7–8

Both of these give small, dramatic examples of catastrophic failure in solving a well-conditioned system using floating point arithmetic if elimination does not reorder the equations. Five minutes of easy reading can be an eye-opener; you will never ignore pivoting again! --KSmrqT 22:54, 1 September 2007 (UTC)

Forsythe, Malcolm, and Moler, in Computer methods for mathematical computations (ISBN 978-0-13-165332-0), first solve the system
 * $$ \begin{bmatrix} 10 & -7 & 0 \\ -3 & 2 & 6 \\ 5 & -1 & 5 \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix} = \begin{bmatrix} 7 \\ 4 \\ 6 \end{bmatrix} \,\!$$

(with pivoting) and obtain the solution (0,−1,1). Then, to emphasize the importance of pivoting, they consider the slightly altered system
 * $$ \begin{bmatrix} 10 & -7 & 0 \\ -3 & 2.099 & 6 \\ 5 & -1 & 5 \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix} = \begin{bmatrix} 7 \\ 3.901 \\ 6 \end{bmatrix}, \,\!$$

which should have the same solution. Working in decimal with five significant digits, the first step of elimination yields
 * $$ \begin{bmatrix} 10 & -7 & 0 \\ 0 & -1.0 \times 10^{-3} & 6 \\ 0 & 2.5 & 5 \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix} = \begin{bmatrix} 7 \\ 6.001 \\ 2.5 \end{bmatrix} . \,\!$$

The next step multiplies 2.5×10³ times the second equation and adds it to the third, which on the right-hand side multiplies 6.001. The product, 1.50025×10⁴, has excess digits, so is chopped to 1.5002×10⁴ before it is added to 2.5. Thus the last equation becomes
 * $$ 1.5005 \times 10^4 x_3 = 1.5004 \times 10^4, \,\!$$

yielding x3 = 0.99993 rather than 1. Harmless error, apparently; but when backsubstituted into the second equation the result is
 * $$ -1.0 \times 10^{-3} x_2 + (6)(0.99993) = 6.001, \,\!$$

yielding x2 = −1.5, which is looking less attractive. Now the first equation becomes
 * $$ 10 x_1 + (-7)(-1.5) = 7, \,\!$$

yielding x1 = −0.35. In other words, failing to pivot has changed the correct answer, (0,−1,1), considerably, to (−0.35,−1.5,0.99993), despite the fact that the problem is very well conditioned. (Its singular values are approximately 13.6, 7.9 and 1.4, implying a condition number less than 10.)

Exchanging the second and third equations, so we always divide by the column element of largest magnitude, fixes the problem. Now we multiply 0.0004 times the (exchanged) second equation, which on the right-hand side produces 0.001, and add that to the (exchanged) third equation. Thus the last equation becomes
 * $$ 6.002 x_3 = 6.002, \,\!$$

yielding x3 = 1, with full accuracy. Now backsubstitution produces
 * $$ 2.5 x_2 + (5)(1) = 2.5, \,\!$$

yielding x2 = −1; and finally
 * $$ 10 x_1 + (-7)(-1) = 7, \,\!$$

yielding x1 = 0.
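For reference, here is a minimal sketch of elimination with partial pivoting applied to the altered system above (illustrative Python/NumPy, not the article's method; note that in double precision even the unpivoted ordering happens to survive, so reproducing the five-digit disaster would require simulating chopped arithmetic):

```python
import numpy as np

def solve_pivoted(A, b):
    """Gaussian elimination with partial pivoting and back substitution."""
    A = A.astype(float).copy()
    b = b.astype(float).copy()
    n = len(b)
    for k in range(n):
        # Partial pivoting: bring the largest |entry| in column k to row k.
        p = k + np.argmax(np.abs(A[k:, k]))
        if p != k:
            A[[k, p]] = A[[p, k]]
            b[[k, p]] = b[[p, k]]
        # Eliminate column k below the pivot.
        for i in range(k + 1, n):
            m = A[i, k] / A[k, k]
            A[i, k:] -= m * A[k, k:]
            b[i] -= m * b[k]
    # Back substitution.
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):
        x[i] = (b[i] - A[i, i + 1:] @ x[i + 1:]) / A[i, i]
    return x

A = np.array([[10.0, -7.0,  0.0],
              [-3.0,  2.099, 6.0],
              [ 5.0, -1.0,  5.0]])
b = np.array([7.0, 3.901, 6.0])
print(solve_pivoted(A, b))   # (0, -1, 1)
```

The row exchange in the loop is exactly the second/third-equation swap performed in the worked example.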

Eventually this example, or one like it, should find its way into the article. It would be criminal neglect to discuss Gaussian elimination and not show this simple but vital detail. --KSmrqT 19:23, 4 September 2007 (UTC)


 * It probably shouldn't go in this article, since the Gaussian elimination algorithm isn't even discussed in detail. I strongly agree that it should be somewhere on Wikipedia.  The article on pivoting is currently a short stub, and could really benefit from the example above. (I would copy it myself, but I don't want to do that without your permission.) In addition, the Gaussian elimination article could probably use a section on "numerical stability", with a link to the article on pivoting.  Jim 00:24, 5 September 2007 (UTC)


 * It would be nice to have an example of instability under naive elimination (presumably more in place at Pivoting or Gaussian elimination), but can we copy this example just like that? Such stability trouble is almost guaranteed if |a_11 a_22 − a_12 a_21| is small compared to |a_11 a_32 − a_12 a_31|, which is easily brought about. I never understood the cavalier attitude implied by "Gaussian elimination is usually considered to be stable in practice if you use partial pivoting ... even though there are examples for which it is unstable" (Gaussian elimination) and "Complete pivoting ... is usually not necessary to ensure numerical stability" (Pivoting). For small systems the gain by using only partial pivoting is small. It is the large systems that are most at risk of instability, and these often arise in contexts where grossly inaccurate solutions can have unacceptable practical consequences. --Lambiam 05:31, 5 September 2007 (UTC)


 * The example is not mine, so it's no use asking me; I'm not sure when it was first introduced, but it's reused in §2.6 of Numerical Computing with MATLAB (mentioned above). As long as credit is given to one of these sources, feel free to use it.
 * I think my "criminal neglect" remark indicates how strongly I feel that we need to raise awareness about pivoting in this article. The only effective way I know to do that is to use an example. I'll guarantee you, just saying "Pivoting is important" won't do the job.
 * Why use partial pivoting instead of complete pivoting? Experience. Nick Higham, in Accuracy and Stability of Numerical Algorithms (ISBN 978-0-89871-355-8), talks about this in more detail (§9.3). He shows that, in theory, there is a class of matrices for which a pivoting factor grows exponentially. But he also notes that until recently no naturally occurring examples of problem matrices were known. (A similar situation is well-known in linear optimization, where the simplex algorithm works much better in practice than its theoretical worst-case would suggest.) I do not know if a theoretical explanation has yet been found; but Higham says in the book (p. 180) "Explaining this fact remains one of the major unsolved problems in numerical analysis." --KSmrqT 17:59, 5 September 2007 (UTC)

Independence section needs a lot of work
Right now every sentence is either wrong or confused: The equations of a linear system are independent if none of the equations can be derived algebraically from the others. When the equations are independent, each equation contains new information about the variables, and removing any of the equations increases the size of the solution set. For linear equations, logical independence is the same as linear independence.

First of all, title the section 'linear independence' not 'independence'. Second, the term 'logical independence' in the last line is used without defining it. I think it has no mathematical definition. Eliminate it. Third, saying none of the equations 'can be derived algebraically' from the others is a convoluted and imprecise way to say that no vector is a linear combination of the other vectors. That is the essence of linear independence. Say it clearly and accurately.

Fourth, is the stuff about the size of the solution set really needed in an elementary section on linear independence? Also, if the system is inconsistent, removing an equation won't necessarily make it consistent, so the claim is wrong anyway. So aside from being likely wrong, the amount of explanation required to make it correct highlights that you should just cut it. I just cut it.

Fifth, talk of equations containing 'new information' about variables when independent is not very helpful. What, mathematically, does it mean to carry new information when linearly independent? Cut that line; it is useless. Or define things more clearly. —The preceding unsigned comment was added by User: (talk • contribs) – Please sign your posts!


 * Thank you for your suggestion. When you feel an article needs improvement, please feel free to make those changes. Wikipedia is a wiki, so anyone can edit almost any article by simply following the  link at the top. The Wikipedia community encourages you to be bold in updating pages. Don't worry too much about making honest mistakes — they're likely to be found and corrected quickly. If you're not sure how editing works, check out how to edit a page, or use the sandbox to try out your editing skills.  New contributors are always welcome. You don't even need to log in (although there are many reasons why you might want to).    --Lambiam 13:11, 26 December 2007 (UTC)

Section on Equivalence needs work
It says: Two linear systems using the same set of variables are equivalent if each of the equations in the second system can be derived algebraically from the equations in the first system, and vice-versa. Equivalent systems convey precisely the same information about the values of the variables. In particular, two linear systems are equivalent if and only if they have the same solution set.

One, what does it mean to derive equations algebraically? This is imprecise or incomplete. What type of algebraic operations on which parts of the system of equations? Mention elementary row operations or something more specific. If system A can be changed into system B using elementary row operations, then they are equivalent. Something like that.

Two, why this description of in terms of conveying the same information? This is imprecise, perhaps even inaccurate, and shouldn't be there. Start the section with the definition of equivalence, which means having the same solution set. Then mention elementary row operations leading to equivalent systems. —The preceding unsigned comment was added by User: (talk • contribs) – Please sign your posts!


 * See above. --Lambiam 13:12, 26 December 2007 (UTC)

Linear system
Linear system doesn't point here, and isn't even ambiguous with this page. Seems odd, given that I've called Ax = b a linear system as long as I've known any English term for it (i.e. since college). I have no idea how to go about changing this. Bhudson (talk) 17:30, 18 June 2008 (UTC)


 * You're right; I added a line at the top of linear system. -- Jitse Niesen (talk) 18:14, 18 June 2008 (UTC)

Underdetermined
Underdetermined redirects here, but the word is not mentioned in this article. I also find it strange that that would redirect here whereas overdetermined system gets its own article. This area could use some cleanup. Dcoetzee 00:05, 6 October 2008 (UTC)

Formulating a problem as a system of linear equations
Is there language or are there ideas describing when a problem can or cannot be formulated as a system of linear equations? For example, as I understand it a network of linear springs in N dimensions (N > 1) cannot be formulated as a system of linear equations if the springs are allowed to rotate in space, but a similar system can be formulated as a linear system if the springs cannot rotate. (That is, each spring is a constraint defining the relative position from particle i to particle j as opposed to simply constraining the distance between two particles, regardless of direction.)

A similar example is that a rigid-body transformation in 3D is a nonlinear operation on vectors in 3-space but you can formulate it as a linear transformation in homogeneous coordinates.

Yet another example is logistic regression, which, as I understand it, is nonlinear at face value, but can be transformed to a linear regression by way of the logit function.

What language is there to talk about this sort of reformulation, and are there rules to say when you can and when you cannot reformulate a problem to be linear? —Ben FrantzDale (talk) 14:37, 5 December 2008 (UTC)
 * Also related: The Kernel trick, which apparently is an "algorithm to solve a non-linear problem by mapping the original non-linear observations into a higher-dimensional space, where the linear classifier is subsequently used". —Ben FrantzDale (talk) 06:03, 6 December 2008 (UTC)

i need
I need the difference between linear and non-linear equations. —Preceding unsigned comment added by 79.141.17.188 (talk) 14:48, 15 January 2009 (UTC)

Simple. Non-linear equations have a variable raised to a power other than the first (like x² or y³), or products of variables, while linear equations do not. --116.14.68.30 (talk) 06:16, 12 June 2009 (UTC)

As for the general solutions earlier...
Please incorporate them into the article, but please change them to the wiki format of equations. That is, both in the article and in here. Thank you. --116.14.68.30 (talk) 06:24, 12 June 2009 (UTC)

Multiplication
hi there. i edited the system of linear equations page recently because i didn't realize how unmoderated the process was. in the history list, the person who re-corrected the page said this was the place to make my comment (i hope this is the place, please correct me again if not).

i am just having to learn a lot of math and programming very fast to make up for my twenties which i spent as a musician. i am now an effects animator trying to work out a bunch of fluid dynamic stuff for a movie i am working on.

here is the content of my confusion -- line 27: "where A is an m-by-n matrix above, x is a column vector with n entries and b is a column vector with b_m entries". my understanding is that the matrix can be rectangular, i.e.: m not equal to n. if so, the column vector x should have m entries, shouldn't it?

and the vector b should have m entries too, as b_m could be any float, n'est-ce pas?

any help is much appreciated.

i can't believe how great wikipedia is. better than any textbook i've been able to find, and completely community-driven/ad-free... heaven. thank you.

p.s.: sorry about the plain text, haven't yet had a chance to figure out the math formatting.


 * Hi, please refer to matrix multiplication. Multiplication can only happen between an m-by-n matrix and an n-by-k matrix. So if A is m-by-n then the column vector x should be n-by-1. Of course, the result matrix b is m-by-1. wshun 04:35, 13 Aug 2003 (UTC)
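wshun's dimension rule in a concrete sketch (illustrative Python/NumPy, not from the discussion): an m-by-n matrix A times an n-vector x gives an m-vector b, so A may indeed be rectangular.

```python
import numpy as np

A = np.array([[1.0, 2.0, 3.0],
              [4.0, 5.0, 6.0]])   # m = 2 equations, n = 3 unknowns
x = np.array([1.0, 0.0, -1.0])   # n entries, one per unknown
b = A @ x                        # m entries, one per equation
print(b.shape)                   # (2,)
print(b)
```

Note x has n entries (one per column of A), not m; that is the point of confusion resolved in the exchange below.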

ok. i think i get the row/column part, but now i am confused about the equivalence. on one hand, there is the matrix representation of all the coefficients of the system, A, which has m × n elements. on the other hand, it is stated that this can also be represented Ax=b, which seems like the whole matrix above plus one part of a matrix multiplication. if

uh. hold on. i just got it.

x1 through xn are the same on every row. (duh). and x is just those values, being multiplied by each row of the matrix of coefficients to make each value of b. ok. thanks for the help. i'll go ahead and submit this in case it helps some other equally confused person...

ok, i just read the part of the style notes that warns to talk of the article, not the subject. if someone more wiki-savvy wants to delete this talk, please do. on the bright side, a small error was fixed because of this discussion.

Hello. Please sign your comments. It's a wonder SineBot hasn't caught you yet. --116.14.68.30 (talk) 06:26, 12 June 2009 (UTC)

General solution for 1×1 systems of linear equations
ax + b = c

This is the general structure of a 1×1 system of linear equations. The solution can be shown to be:

x = (c – b)/a --116.14.68.30 (talk) 06:20, 12 June 2009 (UTC)
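The formula is easy to check mechanically; a tiny Python sketch (my addition; assumes a ≠ 0, and solve_1x1 is just an illustrative name):

```python
def solve_1x1(a, b, c):
    # Solves a*x + b = c for x; assumes a != 0 (otherwise no unique solution).
    return (c - b) / a

print(solve_1x1(2, 1, 5))  # 2.0, since 2*2 + 1 = 5
```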

General solution for 2×2 systems of linear equations
a1x + b1y = c1

a2x + b2y = c2

This is the general structure for 2×2 systems of linear equations. Through substitution and elimination, it can be shown that:

x = (c1b2 – b1c2)/(a1b2 – b1a2)

y = (a1c2 – c1a2)/(a1b2 – b1a2) --116.14.68.30 (talk) 06:21, 12 June 2009 (UTC)
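These formulas are Cramer's rule specialised to the 2×2 case; a Python sketch of them (my addition; solve_2x2 is an illustrative name, and the denominator a1b2 − b1a2 is assumed nonzero, i.e. the lines are not parallel or coincident):

```python
def solve_2x2(a1, b1, c1, a2, b2, c2):
    # Solves a1*x + b1*y = c1 and a2*x + b2*y = c2 using the formulas above.
    det = a1 * b2 - b1 * a2  # det == 0 means no unique solution
    x = (c1 * b2 - b1 * c2) / det
    y = (a1 * c2 - c1 * a2) / det
    return x, y

# Example: x + y = 3 and x - y = 1 have the solution x = 2, y = 1.
print(solve_2x2(1, 1, 3, 1, -1, 1))  # (2.0, 1.0)
```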


 * Wow! They're really invaluable. --116.14.34.220 (talk) 14:41, 16 June 2009 (UTC)

General solution for 3×3 systems of linear equations
I'll add something here when I figure it out. In the meantime, someone else can do it too! --116.14.68.30 (talk) 06:24, 12 June 2009 (UTC)
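For what it's worth, the 3×3 analogue follows the same pattern via Cramer's rule: replace each column of the coefficient matrix with the right-hand side and divide by the determinant. A Python sketch (my addition; det3 and solve_3x3 are illustrative names, and det(A) ≠ 0 is assumed):

```python
def det3(m):
    # 3x3 determinant by cofactor expansion along the first row.
    return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
          - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
          + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

def solve_3x3(A, c):
    # Cramer's rule: x_i = det(A with column i replaced by c) / det(A).
    d = det3(A)
    solution = []
    for i in range(3):
        Ai = [row[:] for row in A]   # copy A, then swap in column i
        for r in range(3):
            Ai[r][i] = c[r]
        solution.append(det3(Ai) / d)
    return solution

# x + y + z = 6, 2y + 5z = -4, 2x + 5y - z = 27  has solution (5, 3, -2).
print(solve_3x3([[1, 1, 1], [0, 2, 5], [2, 5, -1]], [6, -4, 27]))
```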

General solution for 4×4 systems of linear equations
I'll add something here when I figure it out. In the meantime, someone else can do it too! --116.14.68.30 (talk) 06:24, 12 June 2009 (UTC)
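Rather than writing out ever-longer closed-form formulas, the 4×4 case (and any n×n case) is usually handled by Gaussian elimination with back-substitution. A Python sketch (my addition; gauss_solve is an illustrative name, partial pivoting is used for numerical stability, and a unique solution is assumed to exist):

```python
def gauss_solve(A, b):
    # Gaussian elimination with partial pivoting on the augmented matrix [A | b].
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for col in range(n):
        # Pick the row with the largest pivot in this column, then swap it up.
        pivot = max(range(col, n), key=lambda r: abs(M[r][col]))
        if M[pivot][col] == 0:
            raise ValueError("no unique solution")
        M[col], M[pivot] = M[pivot], M[col]
        # Eliminate this column from all rows below.
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for k in range(col, n + 1):
                M[r][k] -= f * M[col][k]
    # Back-substitution on the resulting upper-triangular system.
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][k] * x[k] for k in range(r + 1, n))) / M[r][r]
    return x

print(gauss_solve([[1, 1, 1], [0, 2, 5], [2, 5, -1]], [6, -4, 27]))  # [5.0, 3.0, -2.0]
```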

Incomplete thought ...
What a useful article for learning the basics of Linear Equations, and a little beyond. I'm trying to puzzle out what is meant by the edit by User 84.119.67.208 on 7 Sept 2011:
 * An equation written as
 * is called linear if ...  (basically, if it's a linear map.)

Does he mean: "An equation written as: f(x) = c"? If that's the intent, is it adding any useful insight to observe that f(x) is called linear if it's a linear map, else nonlinear? Thanks for any aid. Bookerj (talk) 17:46, 8 December 2011 (UTC)
 * In fact the edit is the next one, by IP User: 67.194.195.74. It looks like an unfinished edit. In any case, it is misplaced and not needed, as the article refers to linear equation, where one may find this definition (or a similar one). Thus, I'll revert this edit. D.Lazard (talk) 19:03, 8 December 2011 (UTC)

Equivalence
A more accurate definition of equivalence would be helpful: what does "can be derived algebraically" mean? If that means "is a linear combination", then the edit by Ott0 was correct. But in the case of an inconsistent system, 1 = 0 is a linear combination of the equations, and any other equation may be obtained algebraically by multiplying 1 = 0 by any linear function. Thus, as it is (and was before Ott0's edit), the definition of equivalence implies that all inconsistent systems are equivalent. D.Lazard (talk) 21:48, 14 May 2012 (UTC)
 * I have implemented the preceding remark in the article. However, it may be better to change the order of the sentences, in order to define "equivalent" as "having the same solution set" and to give the other possible definitions as equivalent properties. D.Lazard (talk) 08:50, 15 May 2012 (UTC)
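To make the point about inconsistent systems concrete, a small worked example (my addition, not part of the original discussion):

```latex
% From the inconsistent system {x = 0, x = 1}, subtracting one equation
% from the other derives 1 = 0; multiplying 1 = 0 by any linear function,
% e.g. y - 7, then derives y - 7 = 0. So under a plain "derivable
% algebraically" definition, every linear equation -- and hence every
% inconsistent system -- would count as equivalent.
\[
  \begin{cases} x = 0 \\ x = 1 \end{cases}
  \;\Longrightarrow\; 1 = 0
  \;\Longrightarrow\; (y - 7)\cdot 1 = (y - 7)\cdot 0
  \;\Longrightarrow\; y - 7 = 0 .
\]
```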

System?
Define a system. Anything that gets an input and gives an output (different from the input) is a system.

Mathematical equations help describe the world around us. It is implicitly a systemic view of the world. Everything around us that has life is a system. Some are obvious (like living creatures) while many are subtle (climate change).

For instance, simultaneous equations are used in logic circuit design to minimise the physical connections required to power up the 7 LEDs of a counter. If you look closely at the equations already illustrated in the article, you see multiple outputs for common input values. That's why we say y is a function of x. The word "function" indicates a system. Anwar (talk) 13:16, 11 July 2008 (UTC)


 * The word system has many meanings in English. The Oxford English Dictionary lists more than twenty definitions, the first one of which is "An organized or connected group of objects." To me, that seems a more likely explanation for the term "system of equations". But the bottom line is, put it in if you have a supporting reference; if not, it should be left out. -- Jitse Niesen (talk) 14:03, 11 July 2008 (UTC)


 * Anwar is right, we already use the left curly brace notation to wrap a set of functions, and the wrapped object is itself a function; the algebraic structure is a category. System has too many meanings in English, and we have precise language in mathematics, like family of sets, collection of objects, and tons of objects in algebra with different properties that can adequately describe mathematics in place of the word system. Consider WP:NATURALDIS and WP:PRECISE: in the article I hardly see any section, not even a single sentence, that provides a definition of system. Why? Lack of citations and reference sources, right? The problem spreads across all mathematical articles which have the word system in their title: System of linear equations, Nonlinear system, Linear system. You see the tags on each article; ultimately, all lack references or agreement from experts. Autonomous system points to Simultaneous equations as a shorthand for simultaneous equations, and guess what, the only reference there is a random webpage about systems of equations from Simmons Bruce, which is a WP:SELFPUBLISH source. Therefore, these phenomena point to WP:CANVASSING and a violation of WP:NOR. Although it diverges from the writing style and verifiability of the Mathematics Portal in general, on the other hand, I can't deny system being a WP:COMMONNAME. Therefore, I suggest the articles should either be removed or better be rephrased and all go where they belong: Category:System Science; see also System and Portal:Systems science. --14.198.221.19 (talk) 08:42, 30 March 2013 (UTC)

Geometric interpretation
For a 3×3 matrix, an inconsistent system usually represents a prism. An infinite number of solutions implies the planes intersect in a straight line, kind of like a book. A unique solution means they meet at a point.

I'm trying to find out how to tell if an inconsistent 3×3 system is parallel or a prism. Can anyone help? — Preceding unsigned comment added by 212.159.75.167 (talk) 16:17, 2 January 2007 (UTC)


 * Parallel planes are easy to recognise: the coefficients of x, y, z in the equations are proportional. Charles Matthews 17:45, 2 January 2007 (UTC)
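The proportionality test above is easy to mechanise; a small Python sketch (my addition; integer coefficients are assumed so exact comparison is safe, and the function name is illustrative). Two normal vectors (a, b, c) are proportional exactly when all three 2×2 cross-products vanish:

```python
def normals_proportional(p, q):
    # p, q: (a, b, c) coefficient triples of two planes ax + by + cz = d.
    # Proportional normals <=> the cross product p x q is the zero vector,
    # i.e. all three 2x2 "cross" determinants vanish.
    a1, b1, c1 = p
    a2, b2, c2 = q
    return a1 * b2 == a2 * b1 and a1 * c2 == a2 * c1 and b1 * c2 == b2 * c1

print(normals_proportional((1, 2, 3), (2, 4, 6)))  # True: parallel (or the same plane)
print(normals_proportional((1, 2, 3), (1, 2, 4)))  # False: the planes intersect
```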


 * Maybe I am stupid, but can anyone explain to me how the intersection of planes can be a single point?
 * This is my first post in wikipedia by the way, my apologies if I violated some protocol. —Preceding unsigned comment added by 134.58.253.57 (talk) 10:34, 29 September 2009 (UTC)


 * i agree with the above point (re: intersection of planes being a single point) and so, i removed "a single point" from the intersection of planes paragraph. — Preceding unsigned comment added by 50.151.127.101 (talk) 12:52, 14 September 2013 (UTC)


 * i can explain this now: although the intersection of *TWO* planes cannot be a single point, the intersection of MORE than two planes can indeed be a single point. — Preceding unsigned comment added by 50.151.127.101 (talk) 12:58, 14 September 2013 (UTC)