Talk:Essential complexity

2007-02-1 Automated pywikipediabot message
--CopyToWiktionaryBot 11:16, 1 February 2007 (UTC)

Merge with Cyclomatic complexity
I say go ahead and merge it. Same with Accidental complexity. Erudecorp ? * 04:20, 2 December 2007 (UTC)

Accidental and essential complexity are not solely related to cyclomatic complexity. I would prefer they were left alone, or merged with the discussion of No_Silver_Bullet. --Neil (talk) 17:23, 7 January 2008 (UTC)

No merge. The only relation between cyclomatic complexity and essential/accidental complexity is that all three talk about complexity in computer programming. It would be reasonable to merge essential and accidental complexity, though. EivindEklund (talk) 08:04, 15 January 2008 (UTC)

Confusing Example
Can someone please annotate the examples? Explain why the first one has a complexity of 1, and how the code can be reduced.

The same applies to the second example: it would be nice to have an explanation of why its complexity is more than 1. It is obviously a complex-looking program, but that alone does not explain why its complexity is essential.

Cyclomatic complexity section is unrelated to the rest of the article.
It seems that this article discusses two completely unrelated ideas that share a name by coincidence.

The lead section discusses complexity that is inherent to a problem, i.e. complexity that no degree of cleverness can eliminate.

However, the cyclomatic complexity section defines a metric of complexity which is anything but essential. Structured control flow is complete: any program written with GOTOs can be rewritten using only structured control flow, without any GOTO.

Said another way, given any example program with essential complexity > 0, there is an equivalent program with essential complexity == 0. The essential complexity metric therefore measures something that is not essential to the problem. To demonstrate:

This original example seems complex because it contains GOTO statements:

    for (i = 0; i < m; i++) {
        for (j = 0; j < n; j++) {
            if (z[i][j] != 0)
                goto non_zero;
        }
        goto found;
    non_zero: ;
    }
    i = -1;
    found: ;

That complexity is an illusion. The following code is equivalent and contains only structured control flow:

    i = -1;
    for (int ii = 0; ii < m && i < 0; ++ii) {
        i = ii;
        for (j = 0; j < n; ++j)
            if (z[ii][j] != 0)
                i = -1;
    }

I recommend that this subsection be torn out and replaced with a 'See also: Cyclomatic complexity'. Thanks, 140.180.249.29 (talk) 16:48, 19 June 2014 (UTC)
 * The Wikipedia articles in this area probably still need work, but the problem is not what you write above. (In simple words, you are wrong.) The actual problem is that the notion of reducibility used by Böhm–Jacopini and that used by Kosaraju and McCabe are different! Böhm–Jacopini allows the introduction of extra variables (and extra predicates using them), while Kosaraju and McCabe do not allow this. I've expanded the article on Böhm–Jacopini to explain this better, I hope. The title of Kozen's 2008 paper, "The Böhm–Jacopini Theorem Is False, Propositionally", says it all, really. 188.27.81.64 (talk) 20:38, 16 July 2014 (UTC)
 * In your transformation, you have duplicated code (the i = -1 assignment) and you have also introduced a new variable, ii, which you not only assign to but also test. Neither of these transformations is allowed under the McCabe or Kosaraju reducibility rules. They are allowed under the Böhm and Jacopini reducibility rules. 188.27.81.64 (talk) 05:59, 17 July 2014 (UTC)
 * I do agree, however, with your first point, namely that I can't see much commonality between the notion mentioned by Brooks and the one by McCabe. I've therefore added a split template to the article. I don't know much about the Brooks notion, but I suspect it could be merged with accidental complexity into a page titled essential complexity and accidental complexity, or something like that. 188.27.81.64 (talk) 05:55, 17 July 2014 (UTC)

Prior work
I'm surprised that a journal published McCabe's paper (in 1976) without noticing that his section VI (about the irreducible graphs), minus the proposed measure, duplicated the [more extensive] findings of the compiler guys, published some four years earlier, namely "Flow graph reducibility" by Hecht and Ullman, which appeared both at STOC '72 and in SIAM J. Comput. the same year. Hecht and Ullman there introduce their "collapsibility", which does what McCabe's reduction does. 188.27.81.64 (talk) 02:06, 21 July 2014 (UTC)