Talk:Computational complexity theory

older entries
I wonder where would be a good place to mention that we know some problems not in P, for instance Presburger arithmetic. --AxelBoldt

I've added it to Complexity classes P and NP. It should also be added to EXPTIME, whenever someone gets around to writing it. I would put it under P too, except that's more of a redirect than a real article. --LC

I've added a note explaining (in brief) what the Co classes are. What do you think about trying to consolidate many of these articles? For example, NP and Co-NP should probably be the same article, as should NP-complete and Co-NP-complete

GulDan 18:10, 31 Jul 2003 (UTC)

Suggestions for real-life examples
There are several problems that seem to have a high computational complexity, but are not yet fully computerized. One example is payroll processing. The computational complexity of payroll processing is made unnecessarily high by various tax laws. Splitting such taxes as social security between the employer and employee is economically pointless and serves only to increase the complexity of payroll processing. (These taxes really ought to be paid by the employees so that they have a more honest conceptual understanding of the full tax burden they bear. Besides, payroll processing is taxing enough without having to pay payroll taxes.)  Various state income tax schemes further complicate the process, as do laws such as the Davis-Bacon Act that require the payment of different wages for different jobs.

Another example is the computation of efficient paths of transport for freight and passengers. The highway transportation system solves this problem by parallelizing the computation among the various drivers. Other situations such as ports, shipping and receiving departments, airports, and train depots often have more centralized control that results in an extremely high concentration of computational complexity and the need for great speed in real time. Hence the high security often employed in such situations and the need to avoid distracting workers such as longshoremen from their jobs. Perhaps these situations would benefit from the further study of computational complexity theory.

Featured article removal
This article was removed from being a featured article with the following comments that may be helpful in improving the article:

How on Earth did this get featured? No mention of models of computation. No mention of Turing machines! No explanation of non-determinism. Nothing about randomized algorithms. Nothing about parallel computation and the structure of P. Nothing about reduction. No pictures. No history. Gdr 08:40, 2004 Jul 26 (UTC)
 * Support removal. The weird thing is that I don't even see this article on Featured article candidates/Featured log.  When the WP:FA page was new, a lot of people added articles themselves, as the candidate process was kind of unclear. - DropDeadGorgias (talk) 17:02, Jul 26, 2004 (UTC)
 * Support removal; this is nowhere near comprehensive. &mdash; Matt 17:32, 27 Jul 2004 (UTC)
 * Remove posthaste. The "notable researchers" section is particularly disturbing; you can hardly do such a list justice, certainly not with only a dozen names. +sj +  05:10, 11 Aug 2004 (UTC)

"Invitation" to work on questionable off-topic article
Work is currently in progress on a page entitled Views of Creationists and mainstream scientists compared. Also currently being worked upon is NPOV (Comparison of views in science), giving guidelines for this type of page. It is meant to be a set of guidelines for NPOV in this setting. People knowledgeable in many areas of science and the philosophy of science are greatly needed here. And all are needed to ensure the guidelines correctly represent NPOV in this setting. :) Barnaby dawson 21:12, 29 Dec 2004 (UTC)
 * We don't need a plethora of articles detailing everybody's views on this highly controversial topic. Wikipedia is an encyclopedia, not a platform for expressing various distorted politico-quasi-religious pseudoscientific views.  For further information on creationism, the reader is referred to the Bible.  There is an article on so-called intelligent design, which already pushes the limits of what is acceptable on Wikipedia, although the "concept" does seem to be widely reported in the mass media.  And what on earth has this got to do with computational complexity theory? -- 130.94.162.61 15:17, 24 December 2005 (UTC)
 * This is clearly moot now. Barnaby dawson 17:52, 24 December 2005 (UTC)

Merge notices
Notices were added requesting a merge from DSPACE and DTIME. I have expanded both of those articles significantly, and believe they should stand alone as articles. Since there were no rationales given for the merge, I have removed the merge notices; I would of course be happy to discuss this further if someone would still like to consider a merge. -- Creidieki 21:56, 2 April 2006 (UTC)

Not sure how I forgot to discuss it here, but here goes. IMO the two concepts DTIME and DSPACE are too similar to each other and to this article, and there isn't a lot said about them that isn't already said (or shouldn't be said) in this article. The author of these articles agrees with me (see his talk page).

BTW, I'm not entirely sure on the procedure, but aren't merge notices supposed to be discussed before being removed?

Ripper234 15:39, 3 April 2006 (UTC)


 * The correct procedure is to change them to . While considering myself a bit of a mergist, I'm inclined to say that keeping them separate may be preferable to merging. First of all, I think it would be nice to see all complexity classes in Category:Complexity classes, and not miss a few due to their being combined with this article. Secondly, I think there is some room for expansion of DSPACE and DTIME. Thirdly, I hope this article will one day be a good introductory article, and I feel that a larger treatment of several complexity classes might hinder this. Of course, I'm always open to counter-arguments. Cheers, —Ruud 16:27, 3 April 2006 (UTC)
 * I'm sorry if I didn't follow procedure; I hadn't been able to find any discussion of the proposed merge. I agree that the two complexity measures (not classes) DTIME and DSPACE are similar to each other, and that they're very important entities in complexity theory.  I do think that they deserve separate articles, because I think that there's enough to say about them to require that, and that there are plenty of articles that should link to DTIME and DSPACE separately from complexity theory (for example, P is defined in terms of a certain amount of deterministic time, and should say so).  I'll try to expand those articles and make the distinctions a little more clear today.  -- Creidieki 17:14, 3 April 2006 (UTC)


 * I think you could define what a resource bound is in terms of big-O notation, and then define DTIME to be a bound on running time and DSPACE a bound on space. Maybe also add other bounds, like communication (I'm not sure of the exact name/definition of communication complexity). I agree that most individual complexity classes (and there are tons of them) do not belong here, but this is part of the very definition of a complexity class. On a side note, what do you think about changing the name from DTIME to Time complexity? Hmm, after writing this line, I went and checked the article on "Time complexity"... guess what – it's a redirect here. (Space complexity was a dictionary definition and should be removed; I replaced it with a redirect here for now.) Ripper234 16:56, 4 April 2006 (UTC)


 * I'm a little bit confused by your post. DTIME and DSPACE are not "part of the very definition of complexity classes"; they're computational resources on a specific model of abstract machine.  You seem to be confusing them with computation time and memory space, which apply on almost any model of Turing machine (nondeterministic, co-nondeterministic, quantum, probabilistic, whatever).  I'd be happy to change the name from "DTIME" to "Deterministic time", but I really believe that "computation time" should be a separate article.  My vision for this hierarchy of articles is:
 * Computation time describes "the number of steps on (any kind of) Turing machine", and talks about the various time-complexity classes on different abstract machines. Compares DTIME and NTIME, etc.
 * DTIME describes computation time on a deterministic machine. Talks about the deterministic-time hierarchy, the classes defined in terms of deterministic time, relationship to other resources, etc.
 * P describes the specific complexity class of polynomial amount of DTIME. Talks about complete problems, applications to feasibility, relations to other classes.
 * Deterministic Turing machine describes the model of abstract machine, and goes somewhat into the time and space complexity classes, and how the model relates to other models.
 * All of these concepts are separate. They are studied separately, and they can easily support separate articles. I've been trying to teach myself complexity theory, and I've found it very confusing to keep track of the different ideas in all of these.  I think that having separate articles is going to be more helpful.  Does that sound like a reasonable division of topics?  -- Creidieki 17:18, 4 April 2006 (UTC)
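For reference, the standard textbook definitions behind the hierarchy sketched above (paraphrased from memory — notation varies slightly between sources, so check a textbook for the exact statement):

```latex
\mathrm{DTIME}(f(n)) = \{\, L \mid L \text{ is decided by some deterministic Turing machine in } O(f(n)) \text{ steps} \,\}
\qquad
\mathrm{P} = \bigcup_{k \ge 1} \mathrm{DTIME}(n^k)
```

This is why the articles layer naturally: P is a class built from the DTIME resource bound, which in turn is defined on the deterministic Turing machine model.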


 * Accepted. I removed the merge notices. Ripper234 05:55, 7 April 2006 (UTC)

Translate German WP article?
It seems that this article would benefit considerably from material contained in the German WP article. 213.3.104.34 07:58, 5 April 2006 (UTC)
 * If you can convince someone to provide a translation, I would be happy to incorporate the material into the English article. Unfortunately, I do not speak German.  -- Creidieki 19:29, 5 April 2006 (UTC)


 * While I know we generally frown upon automated translation services, this link will translate the German page fairly well. Because of the way the links are set up, if you click on a wikilink, you will be sent to a translated version of the target page. --Carl (talk) 03:57, 15 September 2006 (UTC)
 * I just translated some portions of the German article. I omitted large chunks; nevertheless I think that the boilerplate should go here:


 * --Hermel (talk) 12:27, 26 September 2009 (UTC)

The article is beginning to look a lot better now that Hermel has begun translating from the German article. I think this image from the German Wikipedia is excellent:. If someone who speaks German also agrees that this would make a great addition to the English WP, please go ahead and translate it. (Since it's an SVG, this is really easy. Just open the file in a text editor and replace the German names with English names.) --Robin (talk) 02:11, 28 September 2009 (UTC)

Good Article nomination has failed
The Good article nomination for has failed
 * I think the reasons that led to the article losing FA status (which it probably should never have had in the first place) are still reasons not to give the article GA status. I know most people don't read German, but if you look at the German page you can still see a much more elaborate structure, pictures and so on. The current article fails the GA criteria in a number of ways: slightly narrow focus, certainly less than "compelling prose", not enough references and so on. As someone who works in the field, there are also a number of things in the article that make me cringe, like "The most important complete set is NP-complete." Well, so fix it I guess... but in the meantime it's way too early to give the article GA status.

Pascal.Tesson 03:34, 11 July 2006 (UTC)

Fixing this up
I am working on re-writing at least parts of this article. I understand computational complexity theory well, though I am by far not an expert on it, so my knowledge of some specific areas is lacking. Feel free to help out, intervene or to give suggestions.--Konst.able 09:00, 9 October 2006 (UTC)

I changed the introduction to be less technical and contain more applications to the "real world". I think this is more in line with the proposed standard. Some of the replaced material better belongs in the sections to which it pertains. I'll contribute more as I find time. Scottcraig 18:03, 17 October 2006 (UTC)

Fixing this up again
I noticed this article in the "pages needing attention" section of the Computer Science project page. I understand that it's already undergone one major rewrite (which I will look over in detail when I have some time) but I have some thoughts off the top of my head that I'd like to share:


 * 1) Shouldn't this article be called "Run-time complexity theory"?
 * 2) In the opening graf, "scalability" is used to describe algorithm complexity.  While this is perhaps an apt analogy, isn't it technically inaccurate to equate the two concepts?  "The capacity to handle increasing workloads/throughput" isn't quite the same thing as "the increase in running time as input size increases".
 * 3) The Big-O function is mentioned in passing.  Small-O, Omega and Theta are not mentioned at all.  Neither are the concepts of monotonically increasing functions or asymptotic limits.  Shouldn't these concepts be central to the article?
 * 4) I didn't notice any discussion of why theoretical run-time complexity classes are superior to empirical measurements or benchmarks.  When I learned run-time complexity, a "classic" example we were given right off the bat was a side-by-side comparison of running times for two machines running two different sorting algorithms.  Machine A was the equivalent of a 1980's TRS-80, running an O(n lg n) sort.  Machine B was a state-of-the-art (at the time) Pentium 4 running an O(n³) sort.  For small sizes of n, Machine B was light-years faster, but as the size of n increased, Machine A eventually came to pwn Machine B.  I'd like to see a chart like that in the article.
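The data for a chart like that is easy to generate. Here's a sketch (the machine speeds and constant factors below are invented purely for illustration — they are not the figures from the original classroom example):

```python
import math

# Hypothetical speeds: the old machine executes 10,000 ops/sec and runs an
# O(n log n) sort; the new machine executes 100,000,000 ops/sec and runs an
# O(n^3) sort.  All constants are made up for illustration.
def old_machine_seconds(n):
    return (n * math.log2(n)) / 10_000

def new_machine_seconds(n):
    return n ** 3 / 100_000_000

# Find the first input size where the slow machine with the
# asymptotically better algorithm wins.
n = 2
while old_machine_seconds(n) >= new_machine_seconds(n):
    n += 1
print(n)  # → 286
```

With these made-up constants the crossover lands in the high two hundreds, and the gap only widens from there: at n = 1000 the old machine needs about a second while the new one needs about ten.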

I'll give the page a thorough read and make more detailed suggestions later. Groupthink 13:16, 26 June 2007 (UTC)

Run-time complexity
On second thought, run-time complexity really deserves its own article, and the thrust of this article should remain P- and NP-completeness. Thoughts? Groupthink 02:04, 27 June 2007 (UTC)
 * I've added a run-time analysis article. Groupthink 21:02, 28 June 2007 (UTC)


 * I disagree. There is already a P/NP page elsewhere. And the majority of complexity theory deals with run-time complexity. P/NP is only included here because of its fundamental importance in complexity theory and computer science in general. If anything, the P/NP material could be toned down, and the reader directed to the other page. But thanks for working on this. It could really benefit from your input. Scottcraig 17:13, 30 June 2007 (UTC)
 * Then perhaps computational complexity theory should be merged into run-time analysis, and the material on boolean decidability and P/NP-completeness in comp. complexity should be merged into Boolean satisfiability problem and Complexity classes P and NP respectively? Groupthink 21:42, 30 June 2007 (UTC)

Asymptotic complexity
Currently, asymptotic complexity redirects here. This seems wrong to me - it ought to redirect to a page more like big O notation that actually explains the concept of asymptotic complexity in a much more specific setting. But I'm not sure just where to point it to. What do you think? Dcoetzee 03:53, 7 July 2007 (UTC)
 * Either asymptotic analysis or run-time analysis? Groupthink 04:25, 7 July 2007 (UTC)

Notable Researchers
This section of the article has been rubbing me the wrong way since the first time I saw this article. I don't see its relevance, I don't believe that it fits in with Wikipedia's WP:LIST guideline, and the criteria for inclusion of a researcher in that list are uncertain (for one: why is Turing not on the list?) Perhaps it is time to split it out into a new article (List of researchers in computational complexity theory) or remove it altogether, leaving only the most vital people relevant to the field (such as Cook, Turing and Knuth) in the See also section.--Konstable 11:05, 7 July 2007 (UTC)

Question about super-exponential growth
I was wondering something, just out of curiosity. (I apologize if this is an inappropriate use of this talk page.) I was reading about hyperbolic growth, and I was wondering if any algorithm has been discovered whose run time grows hyperbolically. In other words, the runtime would be something like $$\mathrm{O}\left (\frac{1}{b-n}\right )$$, where b is a positive integer, and n is the length of the input. This would mean that, when n=b, the problem requires an "infinite" number of steps, and is therefore undecidable. But for n < b, the problem would be decidable in a finite number of steps. Actually, now that I think about it, this is an inappropriate use of big-O notation, because big-O notation, in complexity theory, is defined in terms of limits at infinity. But still, are there any problems that are decidable for small inputs, but undecidable for sufficiently large inputs? Navigatr85 21:33, 7 July 2007 (UTC)
 * Here's the problem with trying to define something as $$\mathrm{O}\left (\frac{1}{b-n}\right )$$ – your Big-O upper bound has to apply beyond a certain point where n > n0.  If you argue that n0 is less than b, then you get a big old discontinuity when n = b and your function is no longer an upper bound.  If you argue that n0 is greater than b, then you have a valid upper bound, but for negative run-times, which would be undefined.


 * However, not every algorithm is boundable with a Big-O limit (see run-time analysis) (I might be wrong about that, too), and there are undecidable problems (see halting problem). Unfortunately, I do not know if there are problems that are decidable for small inputs but undecidable for large inputs – but I can tell you that there are algorithms whose run-times grow extraordinarily rapidly as n increases, such that the fastest supercomputers in existence would take until the end of the universe to solve problems for sufficiently large inputs (see Ackermann function and busy beaver for examples).  See intractability for more info. Groupthink 00:03, 8 July 2007 (UTC)


 * I have in fact heard of problems that are undecidable for sufficiently large values of the parameter, but decidable for smaller values. Just do a Google search for the phrase "becomes undecidable". My personal favourite is the Post correspondence problem: if the number of tiles is at most 2, it is decidable. If the number of tiles is at least 7, it is undecidable. For values between 2 and 7, we don't know if it's decidable or not. Dcoetzee 20:24, 18 December 2007 (UTC)
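To see what "decidable for small instances" looks like in practice, here is a brute-force search over a Post correspondence instance (a sketch with an arbitrary depth cap — since the general problem is undecidable, no uniform cap can be correct for all instances):

```python
from collections import deque

def pcp_solution(tiles, max_len=8):
    """Breadth-first search for a match in a Post correspondence instance.

    tiles: list of (top, bottom) string pairs.  Returns a list of tile
    indices whose top- and bottom-concatenations agree, or None if no
    match uses max_len tiles or fewer.  A partial sequence is extended
    only while one concatenation is a prefix of the other.
    """
    queue = deque([i] for i in range(len(tiles)))
    while queue:
        seq = queue.popleft()
        top = "".join(tiles[i][0] for i in seq)
        bot = "".join(tiles[i][1] for i in seq)
        if top == bot:
            return seq
        if len(seq) < max_len and (top.startswith(bot) or bot.startswith(top)):
            queue.extend(seq + [i] for i in range(len(tiles)))
    return None

# A classic solvable instance: tiles (a|baa), (ab|aa), (bba|bb).
print(pcp_solution([("a", "baa"), ("ab", "aa"), ("bba", "bb")]))  # → [2, 1, 2, 0]
```

Both the top and bottom concatenations of that answer spell bbaabbbaa. On an unsolvable instance the search either prunes everything away or runs into the cap and returns None, which is exactly the semi-decidability issue.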

History
This page could use some history - in particular it ought to reference Juris Hartmanis and Richard Stearns' "On the Computational Complexity of Algorithms". It already cites a reference with some history. I can help with this. Dcoetzee 03:02, 18 August 2007 (UTC)

What machine model?
What kind of machine is used to measure the number of steps or amount of memory required to solve a problem? A RASP machine? I don't think it's a Turing machine. --Doradus (talk) 05:42, 18 December 2007 (UTC)
 * The definitions of major classes are explicitly designed to be robust against "reasonable" changes in the definition of the underlying abstract machine. In typical presentations various types of Turing machines are used but it doesn't matter - RAM models, cache-oblivious models, pretty much everything works fine. That's why we discuss P and not, say, DTIME(n³). Dcoetzee 20:20, 18 December 2007 (UTC)

Efficiency and Scalability
I just changed the introduction back to emphasize scalability as the major component of complexity theory. Many say that this is the same as "efficiency", and that term is coming to mean the same thing in this field, but in lay terms they are still different.

For example, on small lists, Bubble Sort is more efficient than Hash Sort. But to a complexity theorist, Hash Sort is better, and he might say more "efficient." But he really means that it is more efficient on larger lists, in other words, more scalable.

Also, introductions are supposed to relate the topic to the larger world. So it is important to retain the paragraph about real world implications of the theory.Scottcraig (talk) 18:54, 14 January 2008 (UTC)


 * IINM, that's not quite accurate: Complexity is a measure of the increase in resource consumption (usually time, sometimes memory or disk space) as problem size (usually measured as size of input) increases, whereas scalability is a benchmark of how well a given system copes with increased throughput or load. Groupthink (talk) 20:22, 14 January 2008 (UTC)


 * I'm not sure what you're getting at here. To me, the two things you describe above are similar: as the input (load) increases, the solution (system) consumes more resources. Are you suggesting the word scalability only applies to a restricted set of processes?


 * To be accurate, scalability refers to the behaviour of a particular solution (algorithm) to a problem. Complexity of the problem refers to the scalability of the best possible solution, even without knowing what the solution is.


 * I think what I wrote is accurate, and definitely more correct than speaking of efficiency of algorithms, but it can always be improved, of course. Scottcraig (talk) 21:05, 14 January 2008 (UTC)
 * What I'm "getting at" is that complexity is a theoretical limit, whereas scalability is a practical metric. They're similar but not identical concepts, and shouldn't be treated as if they were the same thing.  Servers have scalability, but algorithms have complexity.  Complexity describes a rate of increase in resource consumption, whereas scalability describes a system's capacity to cope with that increase.  I appreciate your interest in the topic, but I don't think your rewrite takes the right approach, and if and when I have some spare time, I'm going to take a crack at editing this article myself. Groupthink (talk) 21:57, 14 January 2008 (UTC)


 * The old definition explained what complexity was; the new def replaces it with another term, scalability. I don't find that more helpful. I also believe it is not the terminology used in the field, where efficiency is generally used and defined mathematically. So I am restoring the old def.--agr (talk) 22:01, 14 January 2008 (UTC)


 * Well, I guess I'm overruled. I have always learned that problems have complexity, and algorithms have scalability. But you can interchange them if you want. And if you want to introduce a new definition for "efficiency", go ahead. Scottcraig (talk) 23:44, 14 January 2008 (UTC)
 * Please correct me if I'm wrong, but I get the sense that you're approaching this from an informal IT angle rather than a formal CS angle. I also get the sense that you're confusing the notions of "scale" and "scalability".  While it is true that an algorithm's run-time (or space) will "scale up" as n increases (except of course for an O(1) algorithm) that's not the same thing as scalability.  You should also realize that using complexity-notation to define efficiency is hardly a "new definition" – it's been around since the 1950's. Groupthink (talk) 00:06, 15 January 2008 (UTC)
 * No, I am not trying to use IT or theoretic idiomatic language. I am trying to express the ideas in language that most people use, as that is what I see as the purpose of an encyclopedia. So I don't accept the narrow definition of scalability that applies only to computer servers.


 * I think I see you agreeing that O(n) terminology is discussing how things "scale", although I wouldn't use quotation marks here. This is the normal use of the word. Rather, quotes are more suited to "scalability" in the IT world, as it is a more restrictive definition.


 * The usual definition of efficiency in scientific terminology is a measure of how a process performs as a percentage of a theoretical maximum. For example, an electrical generator can have a theoretical efficiency of, say, 65%, and perhaps a realized efficiency of 45%.


 * So by the very nature of the big O terminology, I would argue that complexity is a measure of how an algorithm's requirements grow as the input grows, i.e., how it scales. This idea is how someone new to the field would understand it. On the other hand, big O notation deliberately ignores multiplying factors, which relates to efficiency. An algorithm can run 100 times slower than another, and hence have 1% of the efficiency, and still have the same complexity. If you have citations where the term "efficiency" is used, then I am sure they meant some looser meaning of the term, but that does not mean it belongs in an encyclopedia. Scottcraig (talk) 00:29, 15 January 2008 (UTC)
 * The meaning of given terminology can vary depending on context, and with all due respect, you do not have a good grasp of the proper meaning of the terminology in question in the context of this subject, and your rewrite is poor. Before reverting or rewriting again, please do some reading about this material and get a better grasp on technical definitions before barging in here and erroneously rewriting this article. Groupthink (talk) 06:19, 15 January 2008 (UTC)
 * Thank you for your opinion. Your statement of it and your immediate reversion demonstrates your bad faith, so I no longer need to assume it. I take offense to your suggestion that I have been teaching my classes wrongly. The introduction in its present form is poorly worded and incorrect. The version I tried to insert is substantially the same as had been on this page for a long time, until last October. As well, you continue to remove any justification for the page, as is suggested in the guidelines. Your continued vandalism does not serve Wikipedia well. Obviously, you are a revert troll, and it is pointless to try to improve this page. Scottcraig (talk) 06:31, 15 January 2008 (UTC)
 * Let's please not fight. Neither of you is a troll and you're just having an honest disagreement about the best way to present the topic, and which terminology is appropriate. I think the valid point that Groupthink is making is that the specific term "scalability" isn't conventionally used in complexity literature to explain the concept, but that doesn't mean the concept doesn't apply. It doesn't help that the term "efficiency" is also used in a number of different ways. On the other hand, there is the valid point that people familiar with the IT concept of scalability may be confused or surprised at its use here - which may suggest that it's best to avoid both terms and just explain the concept in more detail. For example, "complexity studies the resource requirements needed to solve problems of different sizes", or something like that. Dcoetzee 17:52, 15 January 2008 (UTC)

Bit complexity and asymptotic complexity
Bit complexity redirects to this article, but unfortunately the article does not explain what this means. What is it? Thank you. --Abdull (talk) 19:48, 24 July 2008 (UTC)
 * Bit complexity refers to a measure of time complexity where the only operations assumed to operate in constant time are operations that act on single bits. I personally think it should be discussed in another article that compares and contrasts it with models where log-sized words are the unit. I'm not sure what such an article would be called. Dcoetzee 21:27, 24 July 2008 (UTC)
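To make that distinction concrete (my own toy example, not taken from any of the articles under discussion): schoolbook addition of two n-bit numbers costs O(n) under bit complexity, while a word-RAM with log-sized words would charge a single unit for a machine-word addition:

```python
def add_bits(a_bits, b_bits):
    """Ripple-carry addition of two equal-length little-endian bit lists.

    Returns (sum_bits, bit_ops).  We charge five single-bit operations
    per position (t = x XOR y, s = t XOR carry, x AND y, carry AND t,
    and the final OR), so the bit complexity is O(n) for n-bit inputs.
    """
    carry, out, ops = 0, [], 0
    for x, y in zip(a_bits, b_bits):
        t = x ^ y
        out.append(t ^ carry)          # sum bit at this position
        carry = (x & y) | (carry & t)  # carry into the next position
        ops += 5
    out.append(carry)
    return out, ops

# 6 + 3 = 9: little-endian bits [0,1,1] + [1,1,0]
print(add_bits([0, 1, 1], [1, 1, 0]))  # → ([1, 0, 0, 1], 15)
```

The same addition counts as one step in a unit-cost word model (for word-sized operands), which is why the two measures can give different answers for algorithms on large integers.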
 * Thank you for your answer. It'd be great if there was an article about the topic you mentioned (because, for example, I cannot imagine what a log-sized word unit is yet). In the meantime, I stumbled upon another redirect, asymptotic complexity, which defies definition. --Abdull (talk) 09:02, 25 July 2008 (UTC)
 * Good idea. For lack of a better name, I've created a new article called Context of computational complexity describing many factors that affect complexity analyses, including the bit/word complexity distinction. Dcoetzee 21:42, 26 July 2008 (UTC)
 * Thank you very much! Now, only asymptotic complexity has to be explained (please! :). --Abdull (talk) 14:47, 30 July 2008 (UTC)
 * That's just a bad redirect. :-) Asymptotic complexity is nothing but complexity metrics described using asymptotic notation. For now I redirected it to Big O notation. Dcoetzee 17:32, 31 July 2008 (UTC)
 * Asymptotic complexity now redirects here again. This article is good, but it doesn't contain the word "asymptotic" at all. The Big O notation might be a better redirect. Werediver (talk) 14:44, 9 June 2021 (UTC)
 * Now, is a redirect to Computational complexity. D.Lazard (talk) 14:58, 9 June 2021 (UTC)
 * As Context of computational complexity is a content fork of Computational complexity, and is an implausible search term, I have redirected it to Computational complexity. Also, I have redirected to the same article, which I have edited to add details to the definition of bit complexity. D.Lazard (talk) 18:11, 9 June 2021 (UTC)

worst-case analysis
The article does a poor job here: it fails to point out that it is heavily biased towards worst-case analysis/worst-case complexity (and there are no articles explaining this concept). I threw in a phrase in the overview section, but someone has to make a prominent point of the issue in the article, since this is a fundamental concept. Probably a whole section about worst-case/average-case, upper/lower bounds, etc. Unfortunately I am not the man for the job. Twri (talk) 19:21, 24 February 2009 (UTC)

Also, the neutrality of this part of the article is disputable. Perhaps a revision is needed? —Preceding unsigned comment added by 98.221.87.128 (talk) 21:45, 6 April 2009 (UTC)

OK done big edit
I think this is better now but is nowhere near perfect; I hope at least it gives something for others to get hold of.

I think there are two main problems:
 * The P versus NP problem is covered too much; it has its own article. I have tidied it up with the view that much of it may be merged with that article.
 * Previous editors were lazy in adding references: the article proper often gives a link but not a proper ref. Fixing these takes me forever; I have done some, but there are plenty left.

Hope on the whole it is an improvement.

SimonTrew (talk) 03:46, 17 March 2009 (UTC)

P-NP not solvable?
"The fact that the P − NP problem has not been solved, and indeed has been shown to be unsolvable..." Is this right, and if so, reference? —Preceding unsigned comment added by J. Holden Caulfield (talk • contribs) 17:34, 31 March 2009 (UTC)

That would have been big news - it definitely isn't right so far.

But as a related question - in the section on P=NP, why is finding proofs of theorems in pure mathematics listed as something that could be done more efficiently if P=NP? By Gödel's results, finding proofs of theorems in pure mathematics is an undecidable problem, and is therefore outside every complexity class, so I would have thought that P=NP would be basically irrelevant for it.

Easwaran (talk) 12:12, 14 September 2009 (UTC)


 * This is not what Gödel's Incompleteness Theorem is about. It just says that for a "sufficiently strong" logic+theory, there are statements that must be labeled "true" but cannot be proved within that system (but possibly can be proved via another meta-system). (Also, Gödel's canonical example of a self-referential statement has a bad smell, as frankly it seems to me he intermixes truth values from different levels with scant justification, but that's another topic.) If you stay in First-Order Logic + Some Theory (e.g. Arithmetic), it intuitively seems correct to say that if "problems that are easy to check are easy to solve" (i.e. P=NP) then "finding a proof" of a provable statement should be about as easy as "checking a proof" of a provable statement (which is known to be easy). This needs to be made more precise, of course. There is a T101 in your kitchen (talk) 21:22, 8 October 2016 (UTC)

Graph Theory
The section on graph theory is little more than a link to the graph theory page. The text inside doesn't really reference graph theory that much, or at least it doesn't explain the connection in graph theoretic terms. Perhaps it could be expanded, or removed? Any thoughts? 86.145.23.98 (talk) 13:57, 9 May 2009 (UTC)
 * I agree. Actually, the whole article is in bad shape. It probably has to be re-written entirely to give it a coherent structure... Pichpich (talk) 17:44, 23 May 2009 (UTC)

New structure?
How should we re-organise this article? Should we divide it (more explicitly) in two parts: (1) computational complexity of problems and (2) analysis of algorithms? — Miym (talk) 23:56, 5 June 2009 (UTC)
 * To my knowledge, computational complexity theory has (almost) nothing to do with algorithm design and analysis.
 * Computational complexity focusses on classifying computational problems into classes according to shared aspects of their computational complexity. In doing that, the objects of study are mostly these classes of problems (complexity classes), rather than specific problems. Often also some aspects of particular problems are studied, such as graph isomorphism, the SAT problem, or undirected st-connectivity.


 * For the running time of concrete computational problems, complexity theory often provides a very coarse answer: For instance, the 2-satisfiability problem is NL-complete and can thus be solved in time $$O(n^c)$$ for some fixed number c. The algorithm witnessing such a bound is often rather straightforward and would be considered trivial or even worthless from the viewpoint of algorithm design. Computational complexity does not care about the concrete value of the number c. The message of this is a structural one: Although the 2-satisfiability problem, a problem in logic, looks different from the ST-connectivity problem in graph theory, its internal structure is extremely similar to the latter. Indeed, all NL-complete problems are mutually extremely similar in structure.


 * Algorithm design and analysis of algorithms focus on providing efficient solutions to computational problems. Algorithm design almost always focuses on a particular problem; and running time analysis of algorithms is even more specialized: it is devoted to understanding a particular solution to a particular computational problem.


 * For concrete computational problems, Algorithm design often has a very detailed answer: On a random access machine, the 2-satisfiability problem can be solved in time O(n) via breadth-first search. On a Turing machine (a different machine model), the resource requirements of this algorithm would be analyzed again, possibly leading to a different result, maybe $$O(n^2)$$. In algorithm design, we have to agree on a machine model, whereas the results from complexity theory are independent of the machine model.
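 * As a concrete aside on the linear-time claim above: the textbook linear-time solution for 2-satisfiability is usually stated via strongly connected components of the implication graph (Aspvall–Plass–Tarjan) rather than plain BFS. A rough self-contained Python sketch (function and variable names are my own, not from any source discussed here):

```python
# 2-SAT in linear time via the implication graph (Aspvall-Plass-Tarjan):
# each clause (u OR v) contributes the implications (NOT u -> v) and
# (NOT v -> u); the formula is satisfiable iff no variable ends up in
# the same strongly connected component as its own negation.

def two_sat(num_vars, clauses):
    """clauses: pairs of literals; literal k > 0 means x_k, k < 0 means NOT x_k."""
    n = 2 * num_vars

    def node(lit):  # map a literal to a node index in the implication graph
        return 2 * (abs(lit) - 1) + (0 if lit > 0 else 1)

    graph = [[] for _ in range(n)]
    rev = [[] for _ in range(n)]
    for u, v in clauses:
        graph[node(-u)].append(node(v))
        graph[node(-v)].append(node(u))
        rev[node(v)].append(node(-u))
        rev[node(u)].append(node(-v))

    # Kosaraju's algorithm, iteratively: first pass records finish order ...
    order, seen = [], [False] * n
    for s in range(n):
        if seen[s]:
            continue
        seen[s] = True
        stack = [(s, 0)]
        while stack:
            v, i = stack.pop()
            if i < len(graph[v]):
                stack.append((v, i + 1))
                w = graph[v][i]
                if not seen[w]:
                    seen[w] = True
                    stack.append((w, 0))
            else:
                order.append(v)

    # ... second pass labels components on the reversed graph.
    comp = [-1] * n
    for label, s in enumerate(reversed(order)):
        if comp[s] != -1:
            continue
        comp[s] = label
        stack = [s]
        while stack:
            v = stack.pop()
            for w in rev[v]:
                if comp[w] == -1:
                    comp[w] = label
                    stack.append(w)

    return all(comp[2 * i] != comp[2 * i + 1] for i in range(num_vars))
```

This is exactly the kind of machine-level detail that complexity theory abstracts away: the same algorithm analyzed on a Turing machine could land at a different polynomial, but the NL-completeness statement is unaffected.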


 * There are small overlaps between the two fields. Most notably, the existence of a particular polynomial-time algorithm for the SAT problem would be a solution to the P versus NP problem. But whereas even an algorithm running in time $$O(2^{\sqrt{n}})$$ on a random access machine would undoubtedly be considered a breakthrough in algorithm design, it would not have any immediate consequence for computational complexity theory. In my personal experience, this viewpoint appears to be shared by most complexity theorists and algorithm research folks. Compare the recent books by Arora and Barak: "Computational Complexity: A Modern Approach" (draft available online) and by Goldreich: "Computational Complexity: A Conceptual Perspective" (drafts of some chapters available online). Further evidence is given by the following: The call for papers of the conference on computational complexity nowhere contains the words "algorithm", "running time" or so. Compare with the topics of the European Symposium on Algorithms, a major conference on design and analysis of algorithms. There, almost none of the topics have to do with computational complexity.


 * Therefore, I find it necessary to have a separate article devoted to algorithm design, and to take out most of the material on algorithm design. To avoid confusion from its very origins, I also advocate reserving the word complexity for the complexity of computational problems, and using running time or resource consumption when talking about algorithms, but this would be a topic for another day. Note that I have provided multiple reliable sources to back up this opinion. Please, if you have a different standpoint, justify your claims. Hermel (talk) 13:01, 15 June 2009 (UTC)


 * Removing everything related to algorithm analysis sounds fine to me. We already have the article Analysis of algorithms; we can simply refer to it. — Miym (talk) 13:41, 15 June 2009 (UTC)
 * I just did that and started translating portions of the German article. This had been previously suggested at this talk page. My German is certainly not perfect, and I did not translate everything literally. Hermel (talk) 12:17, 21 September 2009 (UTC)
 * Good work. The article already looks better. --Robin (talk) 12:57, 21 September 2009 (UTC)

Needs some kind of note/cross-reference to Kolmogorov Complexity, and other remarks
Kolmogorov Complexity (length of generating program for given data), rather than run time, is what many if not most people would mean by 'Computational Complexity', and this article should refer to it early on with a reference to its Wikipedia article. Other than that, I'm not sure that the material covered in the article is actually a 'subject' per se. P = NP is a legitimate subject: so are algorithmic design, cryptography, etc. The mere design of a notation about how long something takes to run, isn't much of a subject. The little o and big O notations, for instance, have many other uses in math unrelated to algorithm running time: I hope they have a separate article. —Preceding unsigned comment added by 63.229.11.118 (talk) 04:55, 18 June 2009 (UTC)
 * I strongly disagree. As I said a few lines above: Compare the recent books by Arora and Barak: "Computational Complexity: A Modern Approach" (draft available online) and by Goldreich: "Computational Complexity: A Conceptual Perspective" (drafts of some chapters available online). It IS indeed a subject on its own, and is as such different from Kolmogorov complexity. Hermel (talk) 18:19, 21 June 2009 (UTC)
 * Yes, I also disagree that "many or most people" would think computational complexity is Kolmogorov complexity. --Doradus (talk) 20:00, 14 October 2009 (UTC)

List of complexity theorists
I removed this text from the article:

The field was subsequently expanded by many researchers, including: <!-- This is NOT a "Hall of Fame". All people with wikipedia articles are notable. Please add only names which are described as "groundlayers" by the peers, with citations and summary of major contributions -->


 * Manuel Blum
 * Allan Borodin
 * Stephen Cook
 * Michael Fellows
 * Michael R. Garey
 * Oded Goldreich
 * Juris Hartmanis
 * David S. Johnson
 * Richard Karp
 * Marek Karpinski
 * Donald Knuth
 * Leonid Levin
 * Christos H. Papadimitriou
 * Alexander Razborov
 * Richard Stearns
 * Leslie Valiant
 * Andrew Yao

We need to establish some criteria for inclusion, or not have such a list at all. I haven't seen such lists on other articles either. The article on physics doesn't have a list of notable physicists like Newton, Einstein, etc. --Robin (talk) 18:26, 14 October 2009 (UTC)

Image relating complexity theory and other fields
This image shows the relationship between computability theory, complexity theory and formal language theory. I had originally put it in the lead section, but it takes too much space and makes the lead look bad. Any suggestions on where this might go (including new sections which haven't been written yet)? --Robin (talk) 13:13, 26 October 2009 (UTC)
 * Could we perhaps have a hyperlinked version of the image in the "See also" section (so that we would have an illustrated "see also" section instead of the usual boring list of links)? Or perhaps we could replace the navigation box ComplexityClasses with this (at the very bottom of the page)? — Miym (talk) 00:12, 4 November 2009 (UTC)
 * I would prefer putting it in the "see also" section. Another idea I had was to put this in the history section, with a subsection devoted to how complexity theory is historically related to computability theory and formal language theory. --Robin (talk) 01:13, 4 November 2009 (UTC)



Improving this article
Someone watching this article might have noticed that I've been editing it for the last few weeks trying to improve it. I'd really like to hear suggestions / comments / criticisms to improve it further. Of course, you can be WP:BOLD and improve the article yourself, but if you don't have the time or inclination to do so, I would appreciate any comments you have to improve the article. Any topics you feel that must be mentioned in the article and haven't been covered adequately? Any topics covered too much in detail? --Robin (talk) 22:45, 3 November 2009 (UTC)


 * One of my concerns is the illustration in Computational complexity theory. There seems to be some serious confusion regarding how to read the plot and what it means. Could we simply remove it, as well as all discussion that refers to it? — Miym (talk) 23:42, 3 November 2009 (UTC)


 * Yes, sure. I'm not a fan of that myself. The whole section on intractability is a bit sketchy, but I'm not too sure how to present it properly. It feels like a "criticisms of complexity theory" section, but presented badly. There might be interesting things to mention in such a section, like the fact that SAT solvers in practice do handle largish instances (10,000+ variables), and that even though problems might be hard to solve exactly, they might be easy to approximate. I'll see if I can get more information on this and write some stuff up. --Robin (talk) 23:56, 3 November 2009 (UTC)

Table or list?
Which one of the following seems better for presenting information?

OR a list, like this:
 * DTIME(f(n)): Decision problems solvable by a deterministic Turing machine within time f(n).
 * P: Decision problems solvable by a deterministic Turing machine within time polynomial in the input size.
 * EXPTIME: Decision problems solvable by a deterministic Turing machine within time exponential in the input size.
 * NTIME(f(n)): Decision problems solvable by a nondeterministic Turing machine within time f(n).
 * NP: Decision problems solvable by a nondeterministic Turing machine within time polynomial in the input size.
 * NEXPTIME: Decision problems solvable by a nondeterministic Turing machine within time exponential in the input size.

Robin (talk) 15:57, 19 November 2009 (UTC)

The table looks better, but perhaps we could have fewer columns:

— Miym (talk) 13:33, 21 November 2009 (UTC)


 * Alright, agreed. Maybe if we end up listing only decision problems, that column can be taken out too. --Robin (talk) 14:33, 21 November 2009 (UTC)

By the way, while "polynomial time" is fairly obvious, "exponential time" isn't. If someone new to the topic had to guess what it means, it might be easy to think something like E (complexity) instead of something like EXPTIME. To add to the confusion, we have a (somewhat strange) article Exponential time that strengthens the false impression. Hence we might consider something more explicit like this:

— Miym (talk) 15:30, 21 November 2009 (UTC)
 * OK, sounds reasonable. --Robin (talk) 15:38, 21 November 2009 (UTC)

One says
It is not good form to use the phrase, "one tries to keep"

"Even though some proofs of complexity-theoretic theorems regularly assume some concrete choice of input encoding, one tries to keep the discussion abstract enough to be independent of the choice of encoding. This can be achieved by ensuring that different representations can be transformed into each other efficiently.one tries to keep the discussion abstract enough to be independent of the choice of encoding. This can be achieved by ensuring that different representations can be transformed into each other efficiently."

Better might be,

"When considering complexity the problem should be presented in an abstract form, so that each concrete representation may be transformed into the same problem." — Preceding unsigned comment added by Thepigdog (talk • contribs) 08:06, 12 October 2011 (UTC)

Supertasks
The supertask article has a note at the top stating "For the computer science term, see Computational complexity theory." Is that note accurate? Taken literally, it is not, as the term supertask cannot actually be found anywhere in this article. But I ask here because readers of this talk page may know if that characterization is somehow appropriate. Regards, Orange Suede Sofa  (talk) 07:11, 23 October 2012 (UTC)

Computational complexity
Upper_and_lower_bounds_on_the_complexity_of_problems, begin with stating that "one is interested in proving upper and lower bounds on the minimum amount of time required by the most efficient algorithm solving a given problem. The complexity of an algorithm is usually taken to be its worst-case complexity" Shouldn't it be the maximum amount of time?--Natematic (talk) 17:55, 18 November 2012 (UTC)


 * The complexity is the minimum time required, the running time of the most efficient algorithm. It is possible to spend as much time as you like on solving a problem, so the maximum time isn't a useful concept -- although it can be thought-provoking, see Bogosort for example.  The worst case is the maximum of those minimum times, in other words the greatest length of time you can be compelled to spend even doing it as quickly as possible.  Deltahedron (talk) 18:26, 18 November 2012 (UTC)


 * I agree with you. My point is: if you say "one is interested in proving upper and lower bounds on the minimum amount of time required by the most efficient algorithm solving a given problem", I get that considering the most efficient algorithm, you say that the complexity is the time it spend in its best-case. In other words, it seems to me that the best-fitting formalization for that proposition above is $$\min_{x\in X} \{time(T(x))\}$$ where $$X$$ is the set of possible input and $$T$$ is the most efficient algorithm. — Preceding unsigned comment added by Natematic (talk • contribs) 19:25, 18 November 2012 (UTC)

Dodgy Map
Why does the map showing the shortest route to visit 15 German cities only actually visit 14 of them? Eregli bob (talk) 11:19, 11 May 2013 (UTC)
 * You're right. Other sources also claim that this route covers 15 cities. For example, see . --Robin (talk) 19:14, 14 May 2013 (UTC)
 * The answer to the mystery was one mouse click away. Instead of doing original research :-) The image description says that Dortmund is not labelled in the map (I guess because, as you can see it yourself, there is simply no place for it on the map, despite (or because of) being the largest city in Ruhr area :-) Staszek Lem (talk) 02:16, 12 May 2014 (UTC)

Reversal
An editor is reversing "deterministic" and "non-deterministic" in the following sentence: "Since deterministic Turing machines are special non-deterministic Turing machines, it is easily observed that each problem in P is also member of the class NP." As it would make no sense that way, could someone please try to explain to why it makes no sense? I don't think I can do it. — Arthur Rubin (talk) 05:02, 10 May 2014 (UTC)


 * Hello there! Well, let's face it, you haven't even tried to explain it through, while at the same time I'm far away from being a clueless monkey which "needs to learn to read" as .  Of course, I can be wrong and there shouldn't be a big problem with that if we spend some time and energy discussing the issue –  nobody knows everything, if you agree.
 * Now, back to the subject after I've thought about it again. By definition, concept $A$ is a special case or specialization of concept $B$ if and only if every instance of concept $A$ is also an instance of concept $B$, but there are instances of concept $B$ which are not instances of concept $A$.  Non-deterministic Turing machines (NTMs) can be restricted so they work as deterministic Turing machines (DTMs), while an ordinary DTM (not its multi-tape variations or anything else) can't work as an NTM.  Thus, by the definition, NTMs are a special case of DTMs –  and that's the way the article puts it.
 * Thoughts? If I'm wrong, please point out where's that the case, so the monkey can learn to read better. :) &mdash; Dsimic (talk | contribs) 09:10, 10 May 2014 (UTC)
 * You are misquoting the book you cite. It says that a NTM M can be restricted to a DTM M1 (by choosing one specific successor state) but it does not say that M1 computes the same function as M (which is how I understand your "works as").  Using the same search terms as you did, here we find "every DTM, by definition, is a special NTM" and here "an NTM can be simulated by a DTM, i.e. every NTM has an equivalent DTM".  I would say that an NTM is a TM in which at each point there is a number of successor states, and a DTM is a TM in which at each point there is one or none.  So every DTM is an NTM, but some NTMs are not DTMs, namely those NTMs in which some states have more than one successor.  The concept of DTM is a restriction of the concept of NTM, and the class of DTMs is a subset of the class of NTMs.  The statement that any given NTM may be "restricted" to a DTM is not the same as saying that the concept of NTM is a restriction of the concept of DTM.  Deltahedron (talk) 09:23, 10 May 2014 (UTC)
 * Well, I admit, I was wrong. Thank you for the explanation, and I'll use it as a motivation to learn to read better. &mdash; Dsimic (talk | contribs) 09:50, 10 May 2014 (UTC)

Difference between "decision problems" and "function problems"
We read

"It is tempting to think that the notion of function problems is much richer than the notion of decision problems. However, this is not really the case, since function problems can be recast as decision problems. For example, the multiplication of two integers can be expressed as the set of triples (a, b, c) such that the relation a × b = c holds. Deciding whether a given triple is a member of this set corresponds to solving the problem of multiplying two numbers."

This does not sound correct at all. Consider the predicate "pfactors([f0...fn],composite)", which is TRUE if [f0...fn] are the prime factors of the composite. Evidently, the decision problem "pfactors([f0...fn],composite)" is MUCH easier than the function problem "pfactors([F0...Fn],composite)" where F0 ... Fn are variables to be determined and even n is unknown. Additionally, in the given example, there may be ways to check whether a x b = c holds much more easily than actually computing a x b (we just need to "fail fast" after all). Additionally, the text says "function problems can be recast as decision problems" and then proceeds to give an example where a decision problem is recast as a function problem...

See also, for example, FNP vs. NP: https://complexityzoo.uwaterloo.ca/Complexity_Zoo:F#fnp (which I don't quite understand yet) There is a T101 in your kitchen (talk) 21:46, 8 October 2016 (UTC)
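For what it's worth, the quoted recasting can be made concrete in a toy setting. The pure equality oracle "is a × b = c?" is indeed awkward to invert on its own, but a closely related decision problem — the comparison "is a × b ≤ c?" — already suffices to recover the whole product with a small number of yes/no queries. A Python sketch (my own illustration; note the deliberate swap from equality to comparison):

```python
# Toy version of "function problems can be recast as decision problems".
# A decision oracle answers only yes/no; using the comparison variant
# "a*b <= c?" we recover the full product a*b by exponential search for
# an upper bound followed by binary search, i.e. O(log(a*b)) queries.

def oracle(a, b, c):
    # membership test for the set {(a, b, c) : a*b <= c}
    return a * b <= c

def multiply_via_oracle(a, b):
    hi = 1
    while not oracle(a, b, hi):   # exponential search for an upper bound
        hi *= 2
    lo = 0
    while lo < hi:                # binary search for the least c with a*b <= c
        mid = (lo + hi) // 2
        if oracle(a, b, mid):
            hi = mid
        else:
            lo = mid + 1
    return lo                     # the least such c is exactly a*b
```

This kind of self-reduction is what the article's claim rests on for many problems; whether it preserves efficiency (as the factoring objection above points out) is a separate question.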

External links modified
Hello fellow Wikipedians,

I have just modified one external link on Computational complexity theory. Please take a moment to review my edit. If you have any questions, or need the bot to ignore the links, or the page altogether, please visit this simple FAQ for additional information. I made the following changes:
 * Added archive https://web.archive.org/web/20101212035424/http://www.claymath.org/millennium/P_vs_NP/Official_Problem_Description.pdf to http://www.claymath.org/millennium/P_vs_NP/Official_Problem_Description.pdf

When you have finished reviewing my changes, you may follow the instructions on the template below to fix any issues with the URLs.

Cheers.— InternetArchiveBot  (Report bug) 13:50, 14 January 2018 (UTC)

Input size
I will undo the change 03:53, 19 May 2018‎ of 69.172.150.202. The user removed the bold text from the following paragraph.

"Thus the time required to solve a problem (or the space required, or any measure of complexity) is calculated as a function of the size of the instance. This is usually taken to be the size of the input in bits."

So the text was changed to

"Thus the time required to solve a problem (or the space required, or any measure of complexity) is calculated as a function of the size of the input."

The user added the following comment to justify the edit:

"Measuring the size of the instance "in bits" is a misleading and unfounded generalization. The same paragraph goes on to give an example where the size is measured on the number of vertices (not the number of bits needed to represent the number of vertices)."

I brought the original text to the attention of the user in a discussion of a post at codereview.stackexchange.com. The user then told me:

"Nobody thinks that way. That was a misleading and uncited statement on Wikipedia. Now it doesn't say that anymore." (https://codereview.stackexchange.com/questions/194611/hackerrank-challenge-apple-and-orange#comment374709_194617) and changed this wiki text as described above. The reason given for the change is nonsense. James Aspnes says in Notes on Computational Complexity Theory, CPSC 468/568: Spring 2017, page 7 "The time complexity of an execution is the number of steps until the machine halts. Typically we will try to bound the time complexity as a function of the size n of the input, defined as the number of cells occupied by the input, excluding the infinite number of blanks that surround it."

So the size of the input depends on the alphabet used to encode the input of the Turing machine, but it is proportional to the number of bits of the input numbers.

Also the Stanford Encyclopedia of Philosophy says "For each problem X, it is also assumed that an appropriate notion of problem size is defined for its instances. Formally, this is a function |⋅|:X→N chosen so that the efficiency of a decision algorithm for X will varying uniformly in |x|. As we have seen, if X⊆N, it is standard to take |n|=log2(n) i.e. the number of digits (or length) of the binary numeral ┌n┐ representing n"

These are two citations for the original statement. Miracle173 (talk) 22:01, 21 May 2018 (UTC)
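The practical force of the "size in bits" convention in the citations above can be shown with a standard toy example (the code is my own, not from either source): trial division touches roughly √n candidate divisors, which is polynomial in the *value* n but exponential, about 2^(|n|/2), in the *input size* |n|.

```python
# "Size of the input in bits": for an integer n >= 1, |n| = floor(log2 n) + 1.
# Trial division takes about sqrt(n) = 2^(|n|/2) steps: polynomial in the
# VALUE n, but exponential in the input SIZE |n| -- which is exactly why
# the convention of measuring input size in bits matters.

def input_size_bits(n):
    return max(1, n.bit_length())  # number of bits in the binary numeral for n

def trial_division_steps(n):
    # count the candidate divisors d = 2, 3, ... with d*d <= n
    steps, d = 0, 2
    while d * d <= n:
        steps += 1
        d += 1
    return steps
```

For example, doubling the number of input bits roughly squares the step count, which is the behavior the "exponential in |n|" phrasing describes.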


 * I agree with your addition. Input size is conventionally measured by the number of bits in the input. --Robin (talk) 06:29, 22 May 2018 (UTC)

The quantitative answer?
Under Computational problems -> Problem instances, it says quantitative answer, but all I see is a yes/no answer; where is the quantity in the answer? I think quantitative should be removed, what do you guys think? — Preceding unsigned comment added by Shalevku (talk • contribs) 16:31, 9 January 2020 (UTC)