Talk:Gauss–Seidel method/Archive 1

Negative Signs Mistakes
I believe that

"A = D − L − U"

Should be

"A = D + L + U".

Any objections?

I agree, and I just made that edit. 128.8.94.207 (talk) 02:54, 15 July 2008 (UTC)
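
For anyone wanting to check the sign convention concretely, here is a minimal numerical sketch (the matrix values are illustrative only, not taken from the article):

```python
# Illustrative 3x3 matrix (not from the article).
A = [[4.0, -1.0, 0.0],
     [-1.0, 4.0, -1.0],
     [0.0, -1.0, 4.0]]

n = len(A)
D = [[A[i][j] if i == j else 0.0 for j in range(n)] for i in range(n)]  # diagonal
L = [[A[i][j] if i > j else 0.0 for j in range(n)] for i in range(n)]   # strictly lower
U = [[A[i][j] if i < j else 0.0 for j in range(n)] for i in range(n)]   # strictly upper

# With this convention, A = D + L + U holds entrywise.
for i in range(n):
    for j in range(n):
        assert A[i][j] == D[i][j] + L[i][j] + U[i][j]
```

The "A = D − L − U" convention simply defines L and U with flipped signs; both conventions appear in the literature, which is what caused the confusion.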

Negative signs
I believe the sentence should read: " where the matrices D, -L, and -U represent the diagonal, strictly lower triangular, and strictly upper triangular parts of A, respectively "

and not:

" negative strictly lower triangular, and negative strictly upper triangular parts of the coefficient matrix "

--Pedroteixeira 01:30, 20 August 2007 (UTC)

See above; I edited this. Should be okay now. 128.8.94.207 (talk) 02:55, 15 July 2008 (UTC)

Typo in the C algorithm and a weirdness
1. The second loop should end with n, not m. 2. The array temp is useless. —Preceding unsigned comment added by 62.39.72.218 (talk) 16:03, 14 August 2008 (UTC)

1. Assuming A is n rows x m columns, I agree, though usually A is written as m x n. In any case, I am not convinced the algorithm works with non-square matrices. 2. Agree. 82.92.241.131 (talk) 09:05, 22 August 2008 (UTC)
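
On point 2, a sketch may make clear why a temp array is redundant in Gauss-Seidel specifically (a hypothetical Python rendering for illustration, not the article's C code):

```python
def jacobi_sweep(A, b, x):
    """One Jacobi sweep: every update must use only old values,
    so a temporary copy of x is essential here."""
    n = len(A)
    x_old = list(x)  # this is where a 'temp' array belongs
    for i in range(n):
        s = sum(A[i][j] * x_old[j] for j in range(n) if j != i)
        x[i] = (b[i] - s) / A[i][i]
    return x

def gauss_seidel_sweep(A, b, x):
    """One Gauss-Seidel sweep: x is updated in place, so new values
    are used as soon as they exist and no temp copy is needed."""
    n = len(A)
    for i in range(n):
        s = sum(A[i][j] * x[j] for j in range(n) if j != i)
        x[i] = (b[i] - s) / A[i][i]
    return x

# Demo on a small diagonally dominant system.
x = [0.0, 0.0]
for _ in range(50):
    gauss_seidel_sweep([[4.0, 1.0], [1.0, 3.0]], [1.0, 2.0], x)
# x now approximates the exact solution [1/11, 7/11]
```

Note also that both loops range over the same n; the code only makes sense for square A.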

Alternative implementations of the Gauss-Seidel method
As I have seen on different websites, there are alternative methods for solving the system. It would be worthwhile to add them to this article so we could compare the different approaches and analyse which one is more efficient and accurate. Moreover, it would be great to provide an example for each variant, to see which one requires fewer steps and gives more accurate numbers. — Preceding unsigned comment added by 132.235.44.179 (talk) 13:39, 28 April 2009 (UTC)

copyright violation
The section Advantages of Gauss-Seidel method appears to have been lifted straight from MathWorld.—3mta3 (talk) 16:46, 12 June 2009 (UTC)
 * I've removed this text, and did a bit of a clean-up of the article. Are the pseudo-code and Matlab still of any use? I don't know if they really add anything to the article. —3mta3 (talk) 12:51, 13 June 2009 (UTC)

code
I removed the Matlab code as it wasn't actually real code, and it involved an inversion of a matrix, which defeats the whole point of using the method (computationally, it is quicker to use forward substitution). If someone wants to clean it up and add it back in, feel free. —3mta3 (talk) 08:41, 23 June 2009 (UTC)
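
To illustrate the point about forward substitution (a generic sketch, not the removed Matlab code):

```python
def forward_substitution(L, b):
    """Solve L y = b for a lower-triangular matrix L in O(n^2) operations,
    avoiding the cost of forming L^-1 explicitly."""
    n = len(b)
    y = [0.0] * n
    for i in range(n):
        # Only the already-computed entries y[0..i-1] appear in the sum.
        s = sum(L[i][j] * y[j] for j in range(i))
        y[i] = (b[i] - s) / L[i][i]
    return y
```

For example, solving [[2, 0], [3, 4]] y = [2, 10] gives y = [1.0, 1.75].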

Better Examples
I was reading this and see that 2x2 matrices are in use as examples, which is fine. How about 3x3 matrices, so that higher-order considerations can be seen? —Preceding unsigned comment added by 24.69.171.178 (talk) 23:42, 27 November 2010 (UTC)

(L*)^(-1) values not entirely correct
if the lower matrix L* for A is:

16    0
 7  -11

(L*)^(-1) has been written as

.0625   0.0000
.0398   -.0909

.0398 is approximately 25^(-1)

It should be noted that 7^(-1) = .142857, not .0398.

I believe your (L*)^(-1) should read:

.0625   .0000
.1429  -.0909

Please note that much of this information has been a helpful review for me today; I simply hope to correct this little error so as not to confuse people who are not familiar with this sort of thing at all and would not be able to spot it as a mistake. —Preceding unsigned comment added by 75.111.27.77 (talk) 23:29, 28 November 2010 (UTC)
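
For anyone wanting to check entries like this numerically, note that inverting a triangular matrix is not the same as taking elementwise reciprocals: for a 2x2 lower-triangular [[a, 0], [c, d]], the inverse is [[1/a, 0], [-c/(a*d), 1/d]], so the (2,1) entry need not equal 1/c. A quick sketch using the 2x2 block quoted above:

```python
import numpy as np

# Lower-triangular block quoted above.
L_star = np.array([[16.0, 0.0],
                   [7.0, -11.0]])

inv = np.linalg.inv(L_star)
# The (2,1) entry of the inverse is -c/(a*d) = -7/(16 * -11) = 7/176,
# which is approximately .0398 -- not 7^(-1).
print(inv)
```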

Convergence criteria not given for the algorithm
If the algorithm outline is given, either explicit convergence criteria should be given or a link to another page that outlines possible choices ought to be used.

One of the simplest, albeit a poor choice, is checking the residual ||Az - b||, for some candidate solution z. A better choice would probably be something of the form:

$$(\phi_k - \phi_{k+1}) / \phi_k$$

for some scalar phi that maps A, z_k, and b to a scalar for some candidate solutions z_k and z_{k+1} such that

$$0 \le \phi_{k+1} \le \phi_k$$

is always true.

—Preceding unsigned comment added by 76.27.5.236 (talk) 06:16, 22 February 2011 (UTC)
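
As one concrete possibility, the residual check mentioned above can be sketched like this (a hypothetical stopping rule for illustration, not one the article prescribes):

```python
def gauss_seidel_solve(A, b, x0, tol=1e-10, max_sweeps=1000):
    """Gauss-Seidel sweeps with a simple stopping criterion:
    stop once the max-norm residual ||A x - b|| drops below tol."""
    n = len(A)
    x = list(x0)
    for sweep in range(1, max_sweeps + 1):
        for i in range(n):
            s = sum(A[i][j] * x[j] for j in range(n) if j != i)
            x[i] = (b[i] - s) / A[i][i]
        residual = max(abs(sum(A[i][j] * x[j] for j in range(n)) - b[i])
                       for i in range(n))
        if residual < tol:
            break
    return x, sweep
```

On a diagonally dominant system such as A = [[4, 1], [1, 3]], b = [1, 2], this terminates after a handful of sweeps rather than running to max_sweeps.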

python code
The backslash line continuations are unneeded. Python continues the statement to the closing bracket, ), ], or }.  David Lambert, 2014-FEB-20  — Preceding unsigned comment added by 144.81.85.9 (talk) 13:31, 20 February 2014 (UTC)
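
A minimal illustration of the implicit continuation rule:

```python
# Inside (), [] or {}, Python continues the statement to the closing
# bracket; no backslash continuation is needed.
total = (1 +
         2 +
         3)
values = [4,
          5,
          6]
assert total == 6
assert values == [4, 5, 6]
```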

C code is wrong?
I think the C code is wrong. Apparently, someone was trying to be clever and combined two loops into one by just adding the if( i != j ) check. However, the first loop depends on values of x from the current iteration, whereas the second is supposed to depend on values of x from the previous one. As such, this algorithm seems as if it will give the wrong result. Or am I confused? 65.183.135.231 (talk) 03:55, 6 May 2008 (UTC)

Actually, the C code is correct! The code simply does not keep a separate "history" of the old solution values x[i]; the already-updated values and the "old" values are both used within the one loop. —Preceding unsigned comment added by 91.65.206.30 (talk) 09:12, 14 August 2008 (UTC)

The derivation of the Jacobi method is very illustrative here, and a similar approach can be used to derive the Gauss-Seidel iteration. Actually, the only difference is that the L matrix is kept on the left-hand side. The formula before the last reads
 * $$(D+L) x^{(k+1)} = b - U x^{(k)} $$

For the i-th row of the equation, the D and L matrices contain coefficients that multiply entries 1..i of the new x vector, as D + L is lower triangular. Simultaneously, U is strictly upper triangular and contains coefficients that multiply entries i+1..n of the old x vector. The solution of this system of equations is obtained by plain forward substitution. 82.92.241.131 (talk) 08:51, 22 August 2008 (UTC)
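
Written out for the i-th row, that forward substitution gives the usual componentwise update (same symbols as the formula above):
 * $$x_i^{(k+1)} = \frac{1}{a_{ii}} \left( b_i - \sum_{j<i} a_{ij} x_j^{(k+1)} - \sum_{j>i} a_{ij} x_j^{(k)} \right)$$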

The code is correct, since it dynamically updates x, but it is incredibly misleading. A much better implementation would use two loops. 152.3.154.238 (talk) —Preceding undated comment was added at 14:22, 23 September 2008 (UTC).

The two loops would contain basically the same code, except for the range of the index; that would be misleading too. My advice would be to add comments, starting with an explanation of what phi stores and its connection to x_i^(k) in the original formula, for example: "after every full run of the inner loop, phi stores x^(k) for the current k; in between, it stores x_l^(k) for all l < j and x_l^(k-1) for all l >= j" (although minimized code like that can always be misunderstood the first time one reads it). 131.246.191.250 (talk) 08:21, 3 February 2015 (UTC)
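
The two-loop formulation being discussed can be sketched as follows (illustrative Python, not the article's C code); because x is updated in place, it computes exactly what the combined loop with if( i != j ) computes:

```python
def gs_sweep_split(A, b, x):
    """One Gauss-Seidel sweep with the two sums written separately:
    x[0..i-1] already hold iteration k+1 values, while x[i+1..n-1]
    still hold iteration k values."""
    n = len(A)
    for i in range(n):
        new_part = sum(A[i][j] * x[j] for j in range(i))         # uses x^(k+1)
        old_part = sum(A[i][j] * x[j] for j in range(i + 1, n))  # uses x^(k)
        x[i] = (b[i] - new_part - old_part) / A[i][i]
    return x
```

The split version makes the dependence on old versus new values explicit, at the cost of duplicating the summation code.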

A needs to be square?
So far I have assumed A is square. What happens if it is not? If A is m x n, b must have m entries and x must have n entries. For m smaller than n, we lack equations to compute all entries of x, and for m larger than n, only the first n equations are used. I think A needs to be square. 82.92.241.131 (talk) 09:05, 22 August 2008 (UTC)


 * The A matrix needs to be a square matrix in order that the system of linear equations has a unique solution. This is a fundamental fact from linear algebra. Otherwise, the system of equations will be either underdetermined or overdetermined. You don't get to just pick the dimensions of A willy-nilly. 74.249.79.213 (talk) 21:58, 29 October 2008 (UTC)


 * In addition, look at the usual applications of the method. Something like a sparse matrix with a strong diagonal is common and the method is tailored to something like that (typical cases in which it is diagonally dominant). If the matrix represents influences of neighboring elements, a square shape is the logical result. 131.246.191.250 (talk) 08:28, 3 February 2015 (UTC)

applicability
The article does not mention that the Gauss-Seidel method is only applicable to diagonally dominant or symmetric positive definite matrices.

For a source, see:

http://mathworld.wolfram.com/Gauss-SeidelMethod.html


 * Not true: the Wolfram page only says that the Gauss-Seidel method is applicable to diagonally dominant or spd matrices, not "only" applicable to them. It is also easy to see that such a statement would be false, because for any lower triangular matrix $$A$$ with non-zero diagonal elements, the Gauss-Seidel method solves $$Ax=b$$ exactly in the first iteration. But there clearly are lower triangular matrices that are neither diagonally dominant nor symmetric, e.g. $$A = \begin{bmatrix} 1 & 0 \\ 2 & 1 \end{bmatrix}$$
 * I changed the article accordingly (removed the word "only"). On a more general note: given a decomposition $$A=M+N$$ where $$M$$ is invertible, $$Ax=b$$ is equivalent to $$x=M^{-1}b-M^{-1}Nx$$. Let $$c:=M^{-1}b, T:=-M^{-1}N$$; then $$x=Tx+c$$, which gives us a fixed-point iteration. Gauss-Seidel is such a method, as can be seen in the Wikipedia article. Such methods converge globally if and only if $$\rho(T)<1$$, where $$\rho(T)$$ is the spectral radius of T. I think this should also be explained or linked in this article; the Jacobi method article explains it as well. -- Mejiwa (talk) 17:56, 21 September 2010 (UTC)
 * The German Wikipedia article states that a spectral radius below one even implies linear convergence. This might be an important fact. Does anyone have a reliable source on this outside of Wikipedia? 131.246.191.250 (talk) 09:08, 3 February 2015 (UTC)
 * If you view Jacobi or Gauss-Seidel as preconditioned Richardson methods, the proof that they converge for $$ \rho(I - P^{-1}A) < 1 $$ does show that they converge linearly, since we show that $$ \frac{\|\underline{e}^{(k+1)}\|}{\|\underline{e}^{(k)}\|} \le \rho(I - P^{-1}A) < 1 $$. Convergence of order 1 is simply the norm when solving linear systems iteratively. ReMarco (talk) 13:02, 28 January 2023 (UTC)
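
The fixed-point criterion above can be checked numerically. A sketch, using the 2x2 matrix consistent with the L* block quoted earlier on this page (the values are an assumption taken from that discussion):

```python
import numpy as np

# Gauss-Seidel as a fixed-point iteration x <- T x + c, with
# M = D + L (lower triangle of A), N = U, and T = -M^{-1} N.
A = np.array([[16.0, 3.0],
              [7.0, -11.0]])
M = np.tril(A)               # D + L
N = np.triu(A, k=1)          # U
T = -np.linalg.solve(M, N)   # iteration matrix, computed without inverting M
rho = max(abs(np.linalg.eigvals(T)))
# rho < 1 guarantees (linearly) convergent iteration for any starting vector.
assert rho < 1
```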