Talk:Ford–Fulkerson algorithm

After 1998 more steps …
I'm new to this, so I didn't change it directly because I'm not 100% sure. But the flow isn't increased by 1 but by min{capacities of edges on the found path}, so it should only take one step to fill one path up. —Preceding unsigned comment added by 85.180.69.27 (talk) 12:19, 20 February 2009 (UTC)

Algorithms by nature terminate. This article is full of references to "whether the algorithm terminates" and "a variation which is guaranteed to terminate". These phrases are problematic! Is F-F not an algorithm at all, but an approach/procedure? Aaronbrick 04:34, 12 May 2006 (UTC)


 * This is a valid point. This entry should actually be called the Ford-Fulkerson Method. It's not called an algorithm because some part of the procedure is left unspecified. The Ford-Fulkerson Method says that you need to keep finding augmenting paths in the residual graph, but does not limit you to any specific way of finding those augmenting paths. That is exactly why the procedure may never terminate, as described in the body of the article. On the other hand, an algorithm should always terminate with an answer, or announce that no answer can be found. That is the basic criterion for decidability, and an algorithm is formally defined as a procedure for testing membership in decidable languages. —Preceding unsigned comment added by 128.227.179.104 (talk) 02:58, 11 October 2007 (UTC)


 * We should use the name most commonly used in the field, regardless of whether it may be technically inaccurate according to a particular definition of "algorithm." I have seen it called "the Ford-Fulkerson method" though, so this may be justified. Dcoetzee 00:10, 3 October 2008 (UTC)


 * As I know the term, an algorithm doesn't have to terminate.


 * "We should use the name most commonly used in the field" Why? If it's an incorrect but common term, then just let that term redirect to the correct term. Wikipedia is enormously responsible for the terms people use "in the field", so it would just be responsible for perpetuating an incorrect term "in the field". As for "As I know the term, an algorithm doesn't have to terminate", it seems you know it differently than the rest of us. Mangledorf (talk) 00:15, 13 November 2009 (UTC)


 * I concur with the next-to-last contributor. The definition of the concept "algorithm" should not postulate termination. Otherwise, due to the non-decidability of halting, you would have an unacceptable situation in which you cannot tell whether something is an algorithm or not unless you have a termination proof for it! Of course, given an algorithm, you are entitled to seek proofs that it terminates, or more generally that it is correct with respect to any given specification. This is the way the issue would be presented in a computability textbook; admittedly, in algorithms textbooks (where the Ford-Fulkerson method would often be found), one typically emphasizes that algorithms should be correct and, in particular, terminate, to the degree that it may be understood that an incorrect algorithm is not worthy of its name. But on a more foundational level, both correct and incorrect, terminating and non-terminating algorithms must be admitted. Moreover, non-terminating algorithms are useful in many practical situations, for example, when an algorithm cannot hit an optimal solution exactly in finite time, but only converge to the optimum; the user may then decide to stop the process at some point according to their resources or requirements. See also Algorithm in Wikipedia. AmirOnWiki (talk) 23:31, 10 February 2010 (UTC)


 * Regardless of the termination issue, I find the usage of "the Ford-Fulkerson method" to be a good choice, as it emphasizes that the sub-algorithm to find augmenting paths has been left unspecified. AmirOnWiki (talk) 04:37, 11 February 2010 (UTC)

Error in example_2.svg
The middle arrow should point from C to B, not from B to C.

Errors in the Algorithm Section
There seems to be a slight error in the "Algorithm" section. There are three formulas with three informal descriptions next to them. The informal descriptions for the second and third seem to have been interchanged. In short, it is:

2. f(u,v)=-f(v,u) - We maintain the net flow.

3. sum(f(u,v))=0 for all nodes except s and t - The amount of flow into a node equals the flow out of the node.

whereas it should be:

2. f(u,v)=-f(v,u) - The amount of flow into a node equals the flow out of the node.

3. sum(f(u,v))=0 for all nodes except s and t - We maintain the net flow.

I would also suggest replacing "for all nodes" with "for all nodes u" for clarity.

Viridium 03:27, 10 April 2007 (UTC)


 * The wording is perhaps a bit unclear. By "maintaining the net flow", it is meant that if, for example, 5 trucks are in reality going from $$u$$ to $$v$$, and 3 trucks from $$v$$ to $$u$$, then we maintain $$f(u,v)=2$$ and $$f(v,u)=-2$$, such that $$f(u,v)=-f(v,u)$$. For the third invariant, it might get clearer if written


 * $$\sum_v f(u,v) = 0 \Longleftrightarrow \sum_v f_-(u,v) + \sum_v f_+(u,v) = 0 \Longleftrightarrow \sum_v f_-(u,v) = \sum_v f_+(u,v) \Longleftrightarrow f_{in}(u) = f_{out}(u)$$


 * where $$f_-$$ is only the negative values, and $$f_+$$ only the positive. Klem fra Nils Grimsmo 05:39, 11 April 2007 (UTC)

Nice clarification, but some detail is wrong there (in the equation above): there is a minus sign missing. Where this minus sign should go depends on which convention you use for the definition of $$f_-(u,v)$$: is it the absolute value of the negative flow, or is it the actual negative flow complete with its numerical negativity?
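The truck example above can be written out as a quick Python check (illustrative only; the variable names are mine, not from the article):

```python
# Gross traffic from the example above: 5 trucks u -> v, 3 trucks v -> u.
gross_uv = 5
gross_vu = 3

# Net-flow convention: f(u, v) = gross(u, v) - gross(v, u).
f_uv = gross_uv - gross_vu   # 2
f_vu = gross_vu - gross_uv   # -2

# Skew symmetry, the invariant under discussion: f(u, v) = -f(v, u).
assert f_uv == -f_vu
print(f_uv, f_vu)  # -> 2 -2
```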

In the example, I don't see how the possibility of flow directly along ABD is skipped at the second step, given that this example of bad behaviour is claimed to result from depth-first search. Surely, in an alphabetic "depth first search", path ABD should be tried next -- before ACBD is tried! --Preceding unsigned comment left by 128.227.179.104 02:58, 11 October 2007 (UTC)

The depth-first search need not be alphabetic - here it seems to be actively worst-case. That sort of thing can happen if the edges are stored in a nondeterministic, unordered set instead of a matrix. It's pretty unlikely, but that's what worst-case means. --Brilliand 22:58, 26 October 2007 (UTC)

Svend 11:36, 03 September 2011 (UTC)


 * In the mentioned lines above it now says:


 * Flow conservation: 		$$\sum_{w \in V} f(u,w) = 0$$ That is, unless u = s or w = t. The net flow to a node is zero, except for the source, which "produces" flow, and the sink, which "consumes" flow.


 * But I think this is slightly wrong. It should say "unless $$u \in \{s, t\}$$" or similar, as $$f(t,w)$$ would be negative (if there were any flow) and likewise $$f(u,s)$$ would be negative.
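The point about the sink can be seen on a tiny hand-made flow (a sketch; the three-node network and names are mine, not the article's): under the skew-symmetric convention, the net flow is zero at every intermediate node, positive at the source, and negative at the sink.

```python
# Hypothetical network s -> a -> t carrying 3 units, stored as a
# skew-symmetric flow function f(u, w).
f = {
    ('s', 'a'): 3, ('a', 's'): -3,
    ('a', 't'): 3, ('t', 'a'): -3,
}
nodes = {'s', 'a', 't'}

def net_out(u):
    """Sum of f(u, w) over all w, i.e. the left side of the invariant."""
    return sum(f.get((u, w), 0) for w in nodes)

# Conservation holds only at the intermediate node; the source has
# positive net outflow and the sink negative, so both s and t must be
# excluded from the invariant.
assert net_out('a') == 0
assert net_out('s') == 3
assert net_out('t') == -3
```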

counterexample
It would be nice to have the counterexample of when it doesn't terminate and converges to the wrong value.--MarSch (talk) 15:43, 2 October 2008 (UTC)


 * I found a likely reference for this (http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.57.2479), but I don't know if I'll have time to write it up. --MarSch (talk) 15:50, 2 October 2008 (UTC)


 * The book "Matching Theory" by Plummer and Lovasz has it. Zerotalk 12:27, 15 October 2009 (UTC)


 * I added the counterexample. Svick (talk) 01:38, 16 October 2009 (UTC)


 * Thanks, but can you describe it more, please? The algorithm only converges to the wrong value if a particular sequence of augmenting paths is chosen. Zerotalk


 * I expanded that section. Do you think it is understandable and correct? (Like you pointed out, it wasn't before.) Svick (talk) 23:34, 9 November 2009 (UTC)


 * The flow converges to 3+2r, not 3. Also, I made the computations and I think that it's enough that M is at least 2 (not that it's important). —Preceding unsigned comment added by Mkonva (talk • contribs) 04:37, 2 December 2009 (UTC)


 * I checked the source I cited and it says that the flow converges to $$\textstyle 1 + 2\sum_{i=2}^\infty r^i = 3$$. I think you are right that it should be $$3 + 2r$$, but maybe I misunderstood something. Could somebody read the paper and confirm that it's really wrong? If that is the case, we should find another source.
 * I think that you are correct that $$M = 2$$ is sufficient, but again, the source says it's $$4$$. Again, we should find another source.
 * Thanks for finding the (possible) errors. Svick (talk) 18:54, 2 December 2009 (UTC)
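Both claims above can be checked numerically. This assumes the counterexample uses the usual value $$r = (\sqrt{5}-1)/2$$, chosen so that $$r^2 = 1 - r$$ (an assumption on my part; the thread doesn't state the value of $$r$$):

```python
import math

# Assumed value of r from the standard non-terminating example,
# satisfying r**2 == 1 - r.
r = (math.sqrt(5) - 1) / 2
assert abs(r**2 - (1 - r)) < 1e-12

# The source's claim: 1 + 2 * sum_{i=2}^inf r**i
#   = 1 + 2 * r**2 / (1 - r) = 1 + 2 * 1 = 3, using r**2 = 1 - r.
from_i2 = 1 + 2 * r**2 / (1 - r)

# If the geometric sum instead starts at i = 1, one gets 3 + 2r,
# matching the correction suggested above.
from_i1 = 1 + 2 * r / (1 - r)

print(from_i2, from_i1)
```

So the two values differ exactly by whether the sum starts at $$i=1$$ or $$i=2$$, which may be where the disagreement with the source comes from.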

Python example
Isn't this more like the Edmonds–Karp algorithm? I thought Ford–Fulkerson keeps the selection of the paths unspecified... — Preceding unsigned comment added by 2001:9E8:1120:8C00:386F:F73E:7DE:BE89 (talk) 19:34, 3 April 2024 (UTC)

The Python code is useless without an example call. In particular, it is not clear how to instantiate the class (and why it should be a class anyway). — Preceding unsigned comment added by 2001:9E8:1120:8C00:386F:F73E:7DE:BE89 (talk) 09:31, 3 April 2024 (UTC)
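For what it's worth, here is a minimal sketch of how such a class could be instantiated and called. This is not the article's code; all names (`FlowNetwork`, `add_edge`, `find_path`, `max_flow`) and the example network are mine:

```python
from collections import defaultdict

class FlowNetwork:
    """Minimal Ford-Fulkerson sketch: DFS for augmenting paths
    in the residual graph, stored as residual capacities."""

    def __init__(self):
        self.adj = defaultdict(list)   # vertex -> neighbours
        self.cap = defaultdict(int)    # (u, v) -> residual capacity

    def add_edge(self, u, v, capacity):
        self.adj[u].append(v)
        self.adj[v].append(u)          # reverse edge for the residual graph
        self.cap[(u, v)] += capacity   # reverse edge starts at capacity 0

    def find_path(self, source, sink, visited):
        """DFS for a path with positive residual capacity; the visited
        set keeps the path simple and guarantees termination."""
        if source == sink:
            return []
        visited.add(source)
        for v in self.adj[source]:
            if v not in visited and self.cap[(source, v)] > 0:
                rest = self.find_path(v, sink, visited)
                if rest is not None:
                    return [(source, v)] + rest
        return None

    def max_flow(self, source, sink):
        total = 0
        while True:
            path = self.find_path(source, sink, set())
            if path is None:
                return total
            bottleneck = min(self.cap[e] for e in path)
            for (u, v) in path:
                self.cap[(u, v)] -= bottleneck
                self.cap[(v, u)] += bottleneck  # allow flow to be undone
            total += bottleneck

# Example call:
g = FlowNetwork()
g.add_edge('s', 'a', 3)
g.add_edge('s', 'b', 2)
g.add_edge('a', 'b', 2)
g.add_edge('a', 't', 2)
g.add_edge('b', 't', 3)
flow = g.max_flow('s', 't')
print(flow)  # -> 5
```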

An anon made the following comment on the python program. Someone should check it.
 * "The code is incomplete and does not work at all. The understanding of language features of Python is very good. But the understanding of the Ford-Fulkerson algorithm is very poor. The DFS in find_path is not even correct." Zerotalk 12:23, 15 October 2009 (UTC)
 * I can see some problems in the code (the path it returns isn't a simple path when it should be, and it doesn't run in optimal time) that should be corrected, but I think that the whole code nevertheless works correctly. Svick (talk) 23:37, 15 October 2009 (UTC)
 * I may be wrong, but I see at least one more problem: why is  not the same as   in  ? Should both the edges not have the same capacity   instead of one being 0? musically_ut (talk) 07:04, 5 November 2009 (UTC)
 * No, this version is also correct. In directed network, edge from $$u$$ to $$v$$ with capacity $$c$$ doesn't mean that there is an edge from $$v$$ to $$u$$ with the same capacity. If your network is undirected (similar to real pipes), you have to add each edge twice, once for every direction. Your version would work too, but only for undirected networks. Svick (talk) 23:42, 9 November 2009 (UTC)

The following example demonstrates a bug in the code: The code incorrectly gives the flow value 12. The problem is that the self.flow variable can only handle one edge between any two nodes, whereas this is often not the case. When adding an arc (i,j) the code also adds the arc (j,i). If now the arc (j,i) is added by the user there is a problem. --Petter (talk) 15:20, 21 April 2010 (UTC)
 * I have now fixed that particular bug. --Petter (talk) 15:26, 21 April 2010 (UTC)

The problem noted above that find_path doesn't return a simple path can lead to a huge performance loss. In my application it took about 16 minutes to find some maximum flows in a graph. To force simple paths I added the following function: and replaced with This reduced the running time to under 3 seconds. Wouter - 12:40, 18 August 2010 (UTC) —Preceding unsigned comment added by 212.19.199.55 (talk)

Code looks like Erlang

 * The following comment was moved from the article (section Python implementation), where it doesn't belong. Svick (talk) 22:35, 3 November 2010 (UTC)

Is it just me or does this code look more like Erlang than Python? There's lots of tail recursion, pattern matching, list concatenation, etc, in this code. All of which are common programming practice in Erlang. — Preceding unsigned comment added by 75.92.222.10 (talk) 04:37, 3 November 2010 (UTC)

Why citations are in the Notes section?
I wonder why all my citations were placed in the "Notes" section. Is this an error that needs to be fixed? Kapil.xerox (talk) 09:21, 27 March 2015 (UTC)

Merge with Edmonds–Karp algorithm
The Edmonds–Karp algorithm is merely an implementation of Ford–Fulkerson that specifies that the augmenting path should be found by breadth-first search. The article on Edmonds–Karp is a WP:Semi-duplicate and contains largely similar content, all of which really should be in this article. Merge Edmonds–Karp into this article? IntGrah (talk) 22:51, 12 April 2024 (UTC)
 * I'm leaning against this merge. It's a named algorithm that is commonly covered in algorithms textbooks (Introduction to Algorithms dedicates a few pages to it), and I'm not aware of any other named algorithm that gets that amount of coverage and doesn't have its own article. It's also more of a subtopic of the Ford–Fulkerson algorithm than a duplicate, and in my opinion the contents aren't similar enough to require merging. For example, the Ford–Fulkerson article could serve as a higher-level overview of the method by discussing the concept of finding augmenting paths in a residual network, why this method works (max-flow min-cut theorem), failure cases with irrational edge capacities, and its max-flow-dependent running time, while the Edmonds–Karp article could dive deeper into its running time (why it's polynomial and independent of maximum flow value) and include implementation details like pseudocode. Malerisch (talk) 02:46, 2 June 2024 (UTC)
 * In my opinion, the non-terminating case with irrational edge capacities is given undue weight in this article. I also think the discussion of time complexities should belong in this article; where else would a comparison of DFS (naive) and BFS (Edmonds–Karp) go? IntGrah (talk) 02:20, 8 June 2024 (UTC)
 * Although Introduction to Algorithms (4th ed.) only mentions irrational edge capacities briefly, most of the section covering the Ford–Fulkerson algorithm in Jeff Erickson's Algorithms is about this failure case, so I'm not sure that its coverage in this article is undue. As for DFS, I'd argue that neither article should really cover that in detail—after all, a key point of Ford–Fulkerson is that the method of finding an augmenting path is unspecified, rather than anything concrete like DFS. (Neither CLRS nor Erickson discuss DFS in much detail with Ford–Fulkerson; CLRS just mentions DFS briefly when stating that finding a path takes $$O(E)$$ time.) Instead, this article could explain why Ford–Fulkerson's running time is $$O(E|f^*|)$$ with integer capacities and mention Edmonds–Karp as an improvement; the Edmonds–Karp article could mention Ford–Fulkerson's running time and explain why BFS improves it to $$O(VE^2)$$. In other words, both articles would include a comparison of their algorithms' running times (some overlap is unavoidable), but each article would focus on its own algorithm. I think this aligns with how textbooks cover these two topics. Malerisch (talk) 10:08, 8 June 2024 (UTC)
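To make the comparison concrete: the only piece that differs between the two is the path-search subroutine. A sketch of the BFS version (the function name and toy residual graph are mine, not from either article):

```python
from collections import deque

def bfs_augmenting_path(cap, adj, source, sink):
    """Edmonds-Karp path search: BFS finds a *shortest* augmenting path
    in the residual graph, which is what makes the overall running time
    polynomial and independent of the capacities."""
    parent = {source: None}
    queue = deque([source])
    while queue:
        u = queue.popleft()
        if u == sink:
            break
        for v in adj[u]:
            if v not in parent and cap[(u, v)] > 0:
                parent[v] = u
                queue.append(v)
    if sink not in parent:
        return None  # no augmenting path left: the flow is maximum
    path, v = [], sink
    while parent[v] is not None:
        path.append((parent[v], v))
        v = parent[v]
    return list(reversed(path))

# Toy residual graph with two disjoint s-t routes of equal length.
adj = {'s': ['a', 'b'], 'a': ['t'], 'b': ['t'], 't': []}
cap = {('s', 'a'): 1, ('a', 't'): 1, ('s', 'b'): 1, ('b', 't'): 1}
path = bfs_augmenting_path(cap, adj, 's', 't')
print(path)  # -> [('s', 'a'), ('a', 't')]
```

Plugging this in where a Ford–Fulkerson implementation would otherwise use DFS (or anything else) yields Edmonds–Karp, which is why the two articles overlap so much.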