Talk:Coppersmith–Winograd algorithm

algorithm
This should probably be tagged as a stub since it doesn't contain the actual algorithm, nor much discussion beyond the big O complexity. I'd do it, but I don't know the algorithm. Lavid 19:46, 25 February 2007 (UTC)

Actually, I do not think anyone has ever written down the full algorithm. The paper only proves that such an algorithm exists. I do not think we can provide an explanation of the technique here without writing an essay as complex as Coppersmith and Winograd's paper. Publishing a short introduction here and a link to the paper is probably the best we can do. 192.167.206.227 14:55, 15 May 2007 (UTC)


 * Is this the same as the Winograd algorithm for matrix multiplication? I'm reading about the Winograd algorithm in a book right now, and it has a quite concise explanation of how the algorithm works. I could give that explanation here, but for some reason I'm not quite sure it is the same thing. The book is available on the Internet (here's the link), though it might not be of much use to you since it's in Serbian :P -- Obradović Goran (talk) 00:19, 19 July 2007 (UTC)


 * According to Strassen algorithm, there is an algorithm due to Winograd in 1980 and another one published in 1990 by Coppersmith and Winograd. Could it be that your book describes the 1980 algorithm? That would also be worthwhile to describe. Winograd's 1980 algorithm is described at http://www.f.kth.se/~f95-eeh/exjobb/background.html . Alternatively, if you give me the page in your book where I should look, I could have a look - formulas are the same whatever language the book is written in. -- Jitse Niesen (talk) 13:14, 23 July 2007 (UTC)


 * Yes, it appears that it is the 1980 algorithm. The Serbian text says that the Strassen algorithm is more efficient than the Winograd algorithm (which is the main reason I doubted it was this algorithm). The algorithm is described on page 212 of the book (220 in the .pdf) - just search for winograd. I will translate the description here, since it is very short:

For simplicity, let us assume that n is even. Let us introduce the notation

$$P_i = \sum_{k=1}^{n/2} p_{i, 2k-1} p_{i, 2k}; i = 1, 2, ... , n,$$

$$Q_j = \sum_{k=1}^{n/2} q_{2k-1, j}q_{2k, j}; j = 1, 2, ... , n.$$

By regrouping, we get

$$r_{ij} = \sum_{k=1}^{n/2}(p_{i, 2k-1} + q_{2k, j})(p_{i, 2k} + q_{2k-1, j}) - P_i - Q_j$$

The numbers $$P_i$$ and $$Q_j$$ are calculated only once for each row of P and each column of Q, which takes only n^2 multiplications in total. The total number of multiplications is therefore decreased to $$n^3/2 + n^2$$. The number of additions is increased by approximately n^3/2. The algorithm is therefore better than the direct one when addition is faster than multiplication (which is usually the case).

Comment. The Winograd algorithm shows that changing the order of calculation may decrease the number of operations, even in expressions as simple in form as matrix multiplication. The next algorithm [Strassen] exploits the same idea much more efficiently.

-- Obradović Goran (talk) 20:08, 17 May 2008 (UTC)
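The translated description above can be sketched directly in code; here is a minimal illustration (my own sketch, mapping the 1-based formulas to 0-based indices; `winograd_multiply` is a hypothetical name, not from the book):

```python
def winograd_multiply(P, Q):
    """Multiply two n x n matrices with Winograd's 1980 scheme (n assumed even).

    Uses n^3/2 + n^2 multiplications instead of the direct n^3.
    """
    n = len(P)
    assert n % 2 == 0, "for simplicity, n is assumed even"
    half = n // 2
    # Row factors P_i and column factors Q_j, computed once (n^2 multiplications total).
    row = [sum(P[i][2 * k] * P[i][2 * k + 1] for k in range(half)) for i in range(n)]
    col = [sum(Q[2 * k][j] * Q[2 * k + 1][j] for k in range(half)) for j in range(n)]
    # r_ij = sum_k (p_{i,2k-1} + q_{2k,j})(p_{i,2k} + q_{2k-1,j}) - P_i - Q_j
    return [[sum((P[i][2 * k] + Q[2 * k + 1][j]) * (P[i][2 * k + 1] + Q[2 * k][j])
                 for k in range(half)) - row[i] - col[j]
             for j in range(n)]
            for i in range(n)]
```

For example, for P = [[1, 2], [3, 4]] and Q = [[5, 6], [7, 8]] this returns [[19, 22], [43, 50]], the ordinary matrix product.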

The article currently suggests that the algorithm exists explicitly but does not give it, with no explanation as to why. Having read the 1990 paper, it looks like this is because the algorithm is not explicitly constructed; only its existence is proved. Can this be made clearer in the article? The suggestion that the algorithm isn't used in practice because it is slow for plausibly sized matrices is also questionable: is it not rather that the algorithm isn't used in practice because it hasn't been explicitly constructed (and possibly is also too slow)? --131.111.213.41 (talk) 03:28, 12 June 2009 (UTC)

?!
''The Coppersmith–Winograd algorithm is frequently used as a building block in other algorithms to prove theoretical time bounds. ''

How does this work? --Abdull (talk) 11:37, 9 August 2008 (UTC)


 * You reduce your problem to one or many matrix multiplications, for example this. --Mellum (talk) 19:43, 10 August 2008 (UTC)
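The reduction idea in the reply above can be shown with a toy example (my own hypothetical illustration, not from this thread): counting triangles in an undirected graph reduces to matrix multiplication, so any O(n^ω) multiplication routine immediately yields an O(n^ω) triangle-counting bound.

```python
def count_triangles(adj):
    """Count triangles in an undirected graph via trace(A^3) / 6.

    Uses plain cubic multiplication here; swapping in a fast matrix
    multiplication routine improves the time bound to O(n^omega).
    """
    n = len(adj)

    def matmul(A, B):
        return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
                for i in range(n)]

    A2 = matmul(adj, adj)
    A3 = matmul(A2, adj)
    # Each triangle contributes 6 closed walks of length 3 (3 starts x 2 directions).
    return sum(A3[i][i] for i in range(n)) // 6
```

On the complete graph K3 this returns 1; on a 4-cycle (which has no triangles) it returns 0.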

A 2.496 algorithm by Pan?
The Numerical Recipes book (at the end of chapter 2) mentions an article from 1984 in SIAM Review (vol. 26, pp. 393-415) that proved an exponent of 2.496 was possible. I haven't yet found a mention of this bound on Wikipedia. It's worth mentioning (if it was a valid paper... I'm not a member of SIAM, so I can't even read it), since it was a better algorithm than Strassen's for a while before this one. Jason Quinn (talk) 21:43, 28 January 2009 (UTC)

Pseudocode?
Hi, I was wondering if someone could include pseudocode for the Coppersmith–Winograd algorithm (if no one wants to do it, I would at least like to know where I can find it, for my own interest).

Thanks, (Dr. Megadeth (talk) 01:24, 5 May 2009 (UTC))

Bounds
This article needs serious refurbishment. It is obvious that the minimum exponent is 2, because all elements of the matrices need to be read. Group theory is not necessary to prove this. By definition, matrix multiplication requires all elements of BOTH multiplicands. 129.97.120.84 (talk) 22:37, 12 November 2010 (UTC)


 * That is exactly what it says currently. --mellum (talk) 12:06, 16 November 2010 (UTC)

The point about needing to read all elements of both matrices gives us a lower *bound* on the minimum possible exponent, but it doesn't tell us that the *optimal* exponent is 2. In principle, it might be that it is mathematically impossible to do better than an exponent of, say, 2.1. I believe the conjecture is that there is no such problem, i.e. that, for each positive epsilon, there exists an algorithm whose exponent is not more than 2 + epsilon. Jumpers for goalposts, enduring image (talk) 15:14, 22 May 2019 (UTC)
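For reference, the quantity under discussion is usually written as the matrix multiplication exponent ω; a standard formalization (this is the usual textbook definition, not something taken from this thread):

```latex
\omega \;=\; \inf\bigl\{\, \tau \in \mathbb{R} \;:\; \text{two } n \times n
\text{ matrices can be multiplied using } O(n^{\tau})
\text{ arithmetic operations} \,\bigr\}.
```

Reading the inputs forces ω ≥ 2, and the conjecture mentioned above is that ω = 2, i.e. for every ε > 0 there is an O(n^{2+ε}) algorithm; since ω is an infimum, it need not be attained by any single algorithm.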

'Big News' update
There was previously a sentence describing the existence of some other matrix multiplication algorithm. The boldface and the strange wording led me to believe that it might be a mis-edit or a draft note. The sentence is quoted verbatim here: "Big News: Very recently an algorithm with only O(n^1.401) multiplications was found. However only, when the original reading of the matrices and the output are not taken into account." I will try to verify the claims above. Cheuk Sang Rudolf Lai (talk) 13:38, 1 April 2011 (UTC)

Failed to access the article
The link to the paper on the O(n^2.3727) algorithm is no longer available. ---Simonmysun (talk) 01:00, 21 August 2013 (UTC)


 * Thank you for noticing, the link is fixed now. — Pt (T) 13:35, 22 August 2013 (UTC)

'fastest known algorithm for square matrix multiplication until 2010'?
The article says

>was the asymptotically fastest known algorithm for square matrix multiplication until 2010.

However, there is no citation or information explaining what that algorithm is. — Preceding unsigned comment added by 143.167.10.35 (talk) 14:32, 23 September 2015 (UTC)

Pop Culture reference
This algorithm was mentioned obliquely (by the exponent of its runtime) in a recent SMBC comic. Not sure whether that's worth a subsection in the article. 2620:0:1013:11:514D:10E:D51B:9F5A (talk) 16:27, 14 October 2019 (UTC)

That's how I got here... Last time I checked, they (habitual Wikipedia collaborators) were frowning upon "pop culture references", but I haven't read recent discussions about it. --AstroNomer (talk) 18:22, 15 October 2019 (UTC)