Talk:Hungarian algorithm

The title of the "Theory" section is not quite right. I can't see that it is any more theoretical than the other sections. What it does is describe the algorithm. Unfortunately the description is not self-contained, and thus completely unreachable for non-specialists. Actually the terse formulation and the references to concepts not introduced (the matrix, prime zeros, etc.) make me think that this is a copyright violation (a paragraph taken from some larger work). Wazow 17:44, 4 March 2006 (UTC)


 * This is an article on the Hungarian method, not a tutorial on Operations Research. The reader is assumed to have sufficient knowledge of the subject to understand what it is about. Terms such as 'matrix' and 'prime zeros' are no more copyrighted than 'algorithm'; they are standard terminology on the topic, so I don't know what you're on about. Miskin 14:34, 24 March 2006 (UTC)

The article should describe the problem solved by the algorithm briefly (not only refer to the problem description elsewhere). It is impossible to describe the solution well if the key objects in the problem are not defined in the text. Wazow 17:45, 4 March 2006 (UTC)


 * Actually I don't understand what you're talking about. To my knowledge there's no reference to any problems elsewhere. Again, you're asking for a tutorial on optimization, which is beyond the scope of the article (for the obvious reasons). Miskin 14:34, 24 March 2006 (UTC)

Macrakis' removal of the algorithm section (which I didn't notice) did make the article largely incomprehensible. Still, the rest of what you said (about copyrighted terminology, not enough background theory, etc.) remains irrational. Miskin 14:53, 24 March 2006 (UTC)

I am not sure on which matrix exactly we are supposed to work. Suppose I want to find a matching of maximum weight in a weighted bipartite graph; shall I use the algorithm directly on the adjacency matrix of that graph? Or do I have to build a matrix whose rows are attached to the first vertex set V1, whose columns are attached to the second vertex set V2, and whose entries contain the weight of edges from V1 to V2 (this would therefore be a submatrix of the adjacency matrix)? Bender05 10:29, 21 March 2006 (UTC)
 * Answering my question: the rows of the matrix stand for the first vertex set, and the columns stand for the second vertex set. Bender05 13:16, 24 March 2006 (UTC)

This is already described in the modeling section, although the answer should be straightforward to someone who has a good understanding of the algorithm. It works on a matrix which can be modeled as a bipartite graph and vice versa. By default, each entry k(i,j) of the matrix holds the value of the flow of the corresponding arc a(i,j) (that is, the arc joining nodes i and j). The absence of an arc is equivalent to the presence of an arc of zero flow, hence the value in the matrix will be zero. The point of the Hungarian method is that you don't need to bother yourself with flow networks. There's a technique to spot the "independent" zeros on the fly. Each state can in theory be modeled on a bipartite graph, but this requires the use of an extra algorithm (usually Ford and Fulkerson). This expansion of the algorithm is called primal-dual. Miskin 14:34, 24 March 2006 (UTC)
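To make the matrix-vs-graph question above concrete, here is a minimal sketch (function name and data representation are mine, not from the article): rows index the first vertex set V1, columns index V2, and a missing edge becomes a zero entry, matching the zero-flow convention described above.

```python
# Hypothetical helper (not from the article): build the cost matrix
# discussed above, with rows indexed by V1 and columns by V2.
# A missing edge is treated as an arc of zero flow, i.e. a zero entry.

def cost_matrix(v1, v2, weights):
    """weights maps (u, v) pairs with u in V1, v in V2 to edge weights."""
    return [[weights.get((u, v), 0) for v in v2] for u in v1]

v1 = ["a", "b"]
v2 = ["x", "y"]
weights = {("a", "x"): 3, ("a", "y"): 1, ("b", "y"): 2}  # no ("b", "x") edge
print(cost_matrix(v1, v2, weights))  # [[3, 1], [0, 2]]
```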

Moderators: can't you add some mathematical capabilities to the wiki so this scientific stuff can be done properly? Egnever 15:42, 24 March 2007 (UTC)

I removed Gaussian elimination and the references to it, because Gaussian elimination is not used in this algorithm. I am also going to remove the link from "primed" to the article Matrix (mathematics), because the word "primed" does not appear in that article.

The modeling section could benefit from rewriting. How does one subtract a value from an infinite value yet retain in some way its relative value? Also, what does one do in the case of i workers and j jobs, when i does not equal j?  If these things were explained or elaborated upon the section would be more useful.

The algorithm section could also benefit from rewriting. The term "starred zero" is undefined. I suspect that it is a reference to the algorithm as originally published, but it has no meaning in its current context. The algorithm section is incomplete without a description of how to produce more zeros in the matrix. The method is NOT Gaussian elimination. I could not find any meaning for "I/O capacity equal to 1" or for "independent zeros". In the context of Wikipedia, these have no meaning.

I found the sections titled Bipartite Graph Representation and A Minimization Problem to be self-contained (with their references) and useful, and appreciate these contributions. Aostrander (talk) 22:31, 24 March 2008 (UTC)
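On the i workers / j jobs question raised above: one common remedy (not stated in the article, so treat this as an assumption about the usual practice) is to pad the cost matrix with zero rows or columns until it is square; the dummy workers or jobs then absorb the surplus at zero cost. A sketch:

```python
def pad_square(cost):
    """Pad a rectangular cost matrix with zeros so it becomes square.
    The dummy rows/columns act as zero-cost workers or jobs, so they
    do not change the optimum of the original assignment problem."""
    n = max(len(cost), len(cost[0]))
    padded = [row + [0] * (n - len(row)) for row in cost]
    padded += [[0] * n for _ in range(n - len(cost))]
    return padded

print(pad_square([[4, 1, 3], [2, 0, 5]]))
# [[4, 1, 3], [2, 0, 5], [0, 0, 0]] -- one dummy worker row added
```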

It might be worth noting that Kuhn credits Jacobi with having essentially invented the algorithm a century earlier, though the connection wasn't realized until recently. --Delirium (talk) 04:51, 13 August 2008 (UTC)

In the section "The algorithm in terms of bipartite graphs", the argument for Delta being positive is unsatisfactory. While it is clear that there is no edge from Z ∪ S to T \ Z, it is not at all clear that there is no edge in the other direction. —Preceding unsigned comment added by 129.67.168.205 (talk) 11:03, 17 May 2009 (UTC)

Awful example, incomplete... —Preceding unsigned comment added by 84.226.159.223 (talk) 13:45, 3 September 2009 (UTC)

The "Matrix interpretation" Section needs to be rewriten. Gready implementation of step 3 does not work in every case ! well writen O(n^3) description: http://csclab.murraystate.edu/bob.pilgrim/445/munkres.html —Preceding unsigned comment added by ArturDwornik (talk • contribs) 18:29, 1 February 2010 (UTC)

Hi; I'm not sure that I've interpreted the description of the matrix implementation correctly. When I try this on the matrix [3,0,1;0,2,0;3,1,3], Step 1 reduces the bottom row by 1, but all of the other steps leave the matrix unchanged, right? As I have it, these are the other effects: Step 2: each column is decreased by zero. Step 3: Zeros are chosen ([middle-left or middle-right] and [top-centre or bottom-centre]), which means the only row without an assignment is either the top or the bottom, so I mark the centre column and thus whichever of the top/bottom rows I didn't already mark. So the only two unmarked values are the zeros in the middle row. Step 4: I subtract zero from each of the zeros in the middle row and add zero to the zeros in the centre column. When I return to Step 1, each row has a zero minimum, so that has stopped helping too. — Preceding unsigned comment added by 71.79.250.78 (talk) 00:01, 12 March 2012 (UTC)
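For what it's worth, the optimum for that matrix can be checked by brute force, independently of how the matrix steps are read (representation mine):

```python
from itertools import permutations

# Brute-force check of the 3x3 example above: try every assignment of
# rows to columns and keep the cheapest.
cost = [[3, 0, 1], [0, 2, 0], [3, 1, 3]]
n = len(cost)
best = min(permutations(range(n)),
           key=lambda p: sum(cost[i][p[i]] for i in range(n)))
print(best, sum(cost[i][best[i]] for i in range(n)))  # (2, 0, 1) 2
```

So the optimal total cost is 2, assigning row 1 to column 3, row 2 to column 1 and row 3 to column 2; any correct reading of the matrix steps has to reach that.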

I don't think the example does the problem justice, because you could just take the row-wise argmin to get the solution. Yes, the Hungarian algorithm could be used here, but the example doesn't illustrate the specific combinatorial optimization benefit. — Preceding unsigned comment added by 193.174.55.143 (talk) 12:38, 2 March 2022 (UTC)

Matrix Interpretation—Step 3
Am I the only one who feels that textual references to rows and columns correspond to columns and rows in the diagrams? That is: we are reading rows when we mean columns, and vice versa. We are directed to 'Mark all rows having no assignments (row 1),' but isn't it column 1 that lacks a clear—ie, unambiguous—assignment, due to its containing two zeros?

I could easily be mistaken about this, because it's a long time since I learned the Hungarian Method. I'll see how things look after a review of my old notes! Aboctok (talk) 21:32, 20 May 2011 (UTC)

I don't know if this is the cause of my confusion, but I too am confused by step 3. What does it mean to "first assign as many tasks as possible"? How are we to do this? Isn't this equivalent to performing step 3? 75.82.11.172 (talk) 07:55, 16 March 2014 (UTC)

Matrix Interpretation-Step 3
I feel that the algorithm as presented is incorrect. Suppose we try to perform it on the following matrix:

$$ \begin{pmatrix} 0'& 0 & 1\\ 1& 0'& 0\\ 0 & 1 & 1\end{pmatrix} $$

Then we first mark row 3, then column 1, then row 1. Then we end up drawing two lines: through row 2 and column 1. This clearly does not cover all of the zeros. — Preceding unsigned comment added by Batconjurer (talk • contribs) 12:00, 12 February 2018 (UTC)

No, just keep going: Row 3 -> Col 1 -> Row 1 -> Col 2 -> Row 2 -> Col 3 so you will draw a line through the three columns. --129.97.124.120 (talk) 20:49, 8 October 2019 (UTC)
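The chain in the reply above can be checked mechanically; here is a small sketch of the marking procedure (function name and data representation are mine), run on the matrix in question:

```python
def cover_lines(zeros, assigned, n):
    """Step-3 marking as described above (representation mine):
    mark rows with no assignment; then mark columns that have a zero
    in a marked row; then mark rows whose assignment lies in a marked
    column; repeat until stable. Lines go through unmarked rows and
    marked columns."""
    marked_rows = {r for r in range(n) if r not in assigned}
    marked_cols = set()
    changed = True
    while changed:
        changed = False
        for r, c in zeros:
            if r in marked_rows and c not in marked_cols:
                marked_cols.add(c)
                changed = True
        for r, c in assigned.items():
            if c in marked_cols and r not in marked_rows:
                marked_rows.add(r)
                changed = True
    return set(range(n)) - marked_rows, marked_cols

zeros = {(0, 0), (0, 1), (1, 1), (1, 2), (2, 0)}
assigned = {0: 0, 1: 1}  # the primed zeros in the matrix above
print(cover_lines(zeros, assigned, 3))
# (set(), {0, 1, 2}): no row lines, lines through all three columns
```

This reproduces the chain Row 3 -> Col 1 -> Row 1 -> Col 2 -> Row 2 -> Col 3: the marking keeps propagating until all three columns carry lines, covering every zero.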

n^3 time
Page says "however Edmonds and Karp, and independently Tomizawa noticed that it can be modified to achieve an O(n^3) running time." Is there a citation for this? — Preceding unsigned comment added by 87.212.207.8 (talk) 14:06, 27 April 2016 (UTC)

I have not found citations, but I think I have found a variation of the bipartite graph algorithm that runs in O(n^3) time. I post it here first for checking, because I am not yet sure enough to modify the main article. The algorithm still uses potentials as in the main article, but speeds up the modification of the potential between matchings. Define the weight of an edge in G as w(u,v) = c(u,v) - y(u) - y(v), the amount by which the potential on that edge can still be increased.

I think the following is correct and has complexity O(n^3):
 * 1) (Calculate distances) Run Dijkstra over G from all unmatched vertices in S; for each vertex i keep track of its distance to the start (d_i) and the previous vertex on the shortest path from the start (p_i).
 * 2) (Modify potential) For all vertices u in S, subtract d_u from its potential. For all vertices v in T, add d_v to its potential. (Reversed compared to the article.)
 * 3) (Add match) Choose an unmatched v in T with the lowest d_v, and reverse all edges on the path from v back to the start (using the previous pointers).

1. The potential property still holds: Suppose we have a matched edge (v,u) with v in T and u in S. Then d_v = d_u, because (v,u) is the only edge entering u and has weight 0 (because it is tight). So the new y(u)+y(v) becomes y(u)+y(v)-d_u+d_v = y(u)+y(v) = c(u,v). Now suppose we have an unmatched edge (u,v) with u in S and v in T. Then d_v ≤ d_u + w(u,v), therefore the new y(u)+y(v) becomes y(u)+y(v)-d_u+d_v ≤ y(u)+y(v)-d_u+d_u+w(u,v) = y(u)+y(v)+w(u,v) = c(u,v). This covers both proofs in the Wikipedia article.

2. Each round a new match is made: The path from t back to the start described by the p_i's is the shortest path. The following statements are for going forward (from some unmatched s in S to some unmatched t in T). Let (u,v) be an edge on this path from S to T. Then d_v = d_u + w(u,v), because otherwise the path would have been shorter, and thus the new potential becomes tight: y(u)+y(v)+w(u,v) = c(u,v). Let (v,u) be an edge on this path from T to S. Then d_v = d_u because it was already tight; the potential doesn't change, so the edge stays tight. All edges on our path are now tight and form an augmenting path, so we get a new match.

Spaanse aak (talk) 19:25, 19 March 2020 (UTC)
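For reference, standard O(n^3) implementations do essentially what is sketched above: grow the matching one row at a time along a shortest augmenting path, folding the Dijkstra step into an inner loop over columns while maintaining potentials. The following is a self-contained minimization sketch in that style (a Python rendering of the widely circulated e-maxx formulation, not the poster's exact pseudocode):

```python
INF = float('inf')

def hungarian(a):
    """O(n^3) assignment for an n x n cost matrix a (minimization).
    u, v are the potentials; p[j] is the row matched to column j
    (1-indexed internally, column 0 is a virtual start column)."""
    n = len(a)
    u = [0] * (n + 1)
    v = [0] * (n + 1)
    p = [0] * (n + 1)
    way = [0] * (n + 1)
    for i in range(1, n + 1):
        p[0] = i
        j0 = 0
        minv = [INF] * (n + 1)   # shortest known distance to each column
        used = [False] * (n + 1)
        while True:              # Dijkstra over columns, folded in
            used[j0] = True
            i0, delta, j1 = p[j0], INF, -1
            for j in range(1, n + 1):
                if not used[j]:
                    cur = a[i0 - 1][j - 1] - u[i0] - v[j]  # reduced cost
                    if cur < minv[j]:
                        minv[j], way[j] = cur, j0
                    if minv[j] < delta:
                        delta, j1 = minv[j], j
            for j in range(n + 1):  # shift potentials by delta
                if used[j]:
                    u[p[j]] += delta
                    v[j] -= delta
                else:
                    minv[j] -= delta
            j0 = j1
            if p[j0] == 0:       # reached an unmatched column
                break
        while j0:                # flip the augmenting path
            j1 = way[j0]
            p[j0] = p[j1]
            j0 = j1
    match = [0] * n
    for j in range(1, n + 1):
        match[p[j] - 1] = j - 1
    return sum(a[i][match[i]] for i in range(n)), match
```

The outer loop runs n times and each shortest-path computation is O(n^2), giving O(n^3) overall, which is the running time the thread above is after.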

External links modified
Hello fellow Wikipedians,

I have just modified one external link on Hungarian algorithm. Please take a moment to review my edit. If you have any questions, or need the bot to ignore the links, or the page altogether, please visit this simple FaQ for additional information. I made the following changes:
 * Added archive https://web.archive.org/web/20060812030313/http://www.math.uwo.ca/~mdawes/courses/344/kuhn-munkres.pdf to http://www.math.uwo.ca/~mdawes/courses/344/kuhn-munkres.pdf

When you have finished reviewing my changes, you may follow the instructions on the template below to fix any issues with the URLs.

Cheers.— InternetArchiveBot  (Report bug) 20:45, 8 November 2017 (UTC)

step 3
The algorithm in step 3 does not work for all matrices. — Preceding unsigned comment added by 132.70.66.14 (talk) 13:18, 19 December 2017 (UTC)

Indeed: here's a small example (maybe a smaller one exists):

$$ \begin{pmatrix} 1 & 1 & 1 & 1 & 0 \\ 1 & 1 & 1 & 1 & 0 \\ 1 & 0 & 0 & 1 & 0 \\ 1 & 1 & 0 & 0 & 1 \\ 0 & 0 & 1 & 1 & 1 \\ \end{pmatrix} $$

This will assign rows 1, 3, 4 and 5. Then it will mark: row 2 -> col 5 -> row (1 & 3) -> col (2 & 3) -> row (4 & 5) -> col (1 & 4) so that now everything is marked and 5 (vertical) lines are drawn. Yet, 4 lines are sufficient: col 5 and rows 3, 4 and 5. --129.97.124.120 (talk) 20:55, 8 October 2019 (UTC)
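The counterexample above can be verified by brute force over all choices of covering lines (representation mine):

```python
from itertools import combinations

M = [[1, 1, 1, 1, 0],
     [1, 1, 1, 1, 0],
     [1, 0, 0, 1, 0],
     [1, 1, 0, 0, 1],
     [0, 0, 1, 1, 1]]
n = len(M)
zeros = [(r, c) for r in range(n) for c in range(n) if M[r][c] == 0]

def min_cover(zeros, n):
    """Smallest number of full row/column lines covering every zero,
    found by exhaustive search (fine at this size)."""
    lines = [('r', i) for i in range(n)] + [('c', j) for j in range(n)]
    for k in range(2 * n + 1):
        for chosen in combinations(lines, k):
            rows = {i for t, i in chosen if t == 'r'}
            cols = {j for t, j in chosen if t == 'c'}
            if all(r in rows or c in cols for r, c in zeros):
                return k

print(min_cover(zeros, n))  # 4
```

So 4 lines (e.g. column 5 plus rows 3, 4 and 5) really are optimal, while the marking procedure as described draws 5, confirming the complaint.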

The algorithm in terms of bipartite graphs, possible error? "(note that the number of tight edges does not necessarily increase)"
The section "The algorithm in terms of bipartite graphs" says: (Precondition P:) "If R_T ∩ Z is empty, then let " [...] (Note N:) "(note that the number of tight edges does not necessarily increase)."

I can't believe that this Note N holds under Precondition P. Also, the section "Proof that the algorithm makes progress" below says:

"that is, to either increase the number of matched edges, or tighten at least one edge"

I understand the "tighten at least one edge" part as exactly the Precondition-P part from above!? (Also, I assume the "increase the number of matched edges" part probably relates to "If R_T ∩ Z is nonempty..." from above?)

As you can infer, I am missing some connections... The "If R_T ∩ Z is nonempty" part should be labeled something like "augmenting case", so that it could be related better to the corresponding part of the proof; or Z could be introduced as something like "the set of nodes that could possibly participate in an augmenting path", with a reference to Berge's lemma, to help the reader understand the purpose of Z from the start. Fjf2002 (talk) 19:26, 14 March 2021 (UTC)

Runtime of Matrix Interpretation
It is unclear to me how to prove that step 5 of the matrix interpretation makes progress, considering that it does not appear to be equivalent to the potential adjustment in the bipartite graph interpretation. B52481 (talk) 00:31, 14 May 2023 (UTC)
 * To prove progress, look at the sum of the whole matrix after each iteration. Each step 5 lowers the total sum, as the (decreased) uncovered cells always outnumber the (increased) doubly covered cells.
 * The total decreases with each iteration until a solution is found. Typically a solution is found before the total reaches zero.
 * An increasing minimum number of required lines is another sign of progress. Yet not all progress shows this way. The minimum number of lines may stay constant during several iterations. The number of lines never goes down though. Uwappa (talk) 09:46, 14 May 2023 (UTC)
 * Makes sense. But can you prove that the number of iterations is upper bounded by a polynomial in n? B52481 (talk) 15:03, 14 May 2023 (UTC)
 * I just added changes to the section (intro and step 5) which should hopefully clarify the matter. I didn't write a proof for the polynomial runtime, but I added a reference to the paper where it can be found. Let me know if the topic needs further explanation. SKOgoras (talk) 16:46, 31 March 2024 (UTC)
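To make the sum argument in this thread concrete, here is a tiny sketch of step 5 (notation and cover representation are mine):

```python
def step5(M, covered_rows, covered_cols):
    """Classic step 5: subtract the minimum uncovered value from every
    uncovered cell and add it to every doubly covered cell."""
    n = len(M)
    m = min(M[r][c] for r in range(n) for c in range(n)
            if r not in covered_rows and c not in covered_cols)
    out = [row[:] for row in M]
    for r in range(n):
        for c in range(n):
            if r not in covered_rows and c not in covered_cols:
                out[r][c] -= m
            elif r in covered_rows and c in covered_cols:
                out[r][c] += m
    return out

M = [[0, 1, 2], [0, 3, 4], [0, 5, 6]]
# Cover column 0 only (one line); the minimum uncovered value is 1.
M2 = step5(M, set(), {0})
print(sum(map(sum, M)), sum(map(sum, M2)))  # 21 15
```

With r covered rows and c covered columns and r + c < n, there are (n-r)(n-c) decreased cells against only rc increased ones, and (n-r)(n-c) > rc exactly when r + c < n, which is why the total strictly drops.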

"loop" in step 3?
In step 3 it says "Repeat this till a closed loop is obtained." What is meant by a loop? We just have a matrix with zero and nonzero elements. (I know a different algorithm where you have to "cover" all zeros by selecting a minimum number of rows and/or columns. You proceed by adding the row or column that covers the maximum number of zeros not yet covered.) — MFH:Talk 15:03, 18 January 2024 (UTC)