Wikipedia:Reference desk/Archives/Mathematics/2017 September 14

= September 14 =

Applying the symbol "approaches the limit", ≐.
What exactly does the symbol "approaches the limit", ≐, mean? Say
 * $$\lim_{h\to 0}\frac{f(x+.5h)-f(x-.5h)}{h}=f'(x).$$

Does this mean f'(x)≐$$\frac{f(x+.5h)-f(x-.5h)}{h}$$ or maybe f'(x)=$$\frac{f(x+.5h)-f(x-.5h)}{h}$$  (h≐0)? 166.186.169.75 (talk) 01:38, 14 September 2017 (UTC)
 * The limit symbol is part of the expression on the left-hand side of the equation. Put simply, it means "as h gets close to 0, the expression within the limit gets close to $$f'(x)$$". For a precise definition, see our article on limits.--73.58.152.212 (talk) 03:10, 14 September 2017 (UTC)
 * I am fairly familiar with the concept of limits and even approaching a limit. My question is how does that particular notation fit into that derivative example?  166.186.168.71 (talk) 13:09, 14 September 2017 (UTC)


 * List of mathematical symbols says that ≐ means "is defined as; is equal by definition to". It doesn't mention anything about limits. Loraof (talk) 15:51, 14 September 2017 (UTC)
 * But if you Google ≐, the general consensus seems to be that it means "approaching the limit". 166.186.168.12 (talk) 02:18, 16 September 2017 (UTC)
 * I think in this context it means approximate equality or limiting value, see Approximation. Different authors may use the same symbol for different things, so it's difficult to say what the exact meaning is intended to be, especially since the ≐ isn't exactly standard. Note also that $$\lim_{h\to 0}\frac{f(x+.5h)-f(x-.5h)}{h}=f'(x)$$ assumes that f is differentiable at x in the first place; it's possible for the limit to exist even if the derivative does not. In terms of numerical analysis, the expression gives a better approximation for the derivative for 'nice' functions than the usual one. See also Symmetric derivative. --RDBury (talk) 20:25, 14 September 2017 (UTC)
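RDBury's remark that the symmetric quotient gives a better approximation than the usual one-sided quotient for 'nice' functions can be checked numerically. A minimal sketch (the test function f(x) = sin x and the step size are illustrative choices, not from the thread):

```python
import math

def forward_diff(f, x, h):
    # usual one-sided difference quotient: error shrinks like h
    return (f(x + h) - f(x)) / h

def symmetric_diff(f, x, h):
    # the quotient from the thread: error shrinks like h**2
    return (f(x + 0.5 * h) - f(x - 0.5 * h)) / h

x, h = 1.0, 1e-3
exact = math.cos(x)  # exact derivative of sin at x
print(abs(forward_diff(math.sin, x, h) - exact))    # error on the order of 1e-4
print(abs(symmetric_diff(math.sin, x, h) - exact))  # error on the order of 1e-8
```

Halving h cuts the one-sided error roughly in half but cuts the symmetric error roughly by four, which is the sense in which the symmetric formula is 'better'.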

From a computational perspective, how do mathematicians succeed in proving theorems?
This question might seem ill-defined, but I will try my best. It's known there's a strong connection between the P=NP problem and proving mathematical theorems; verifying a suggested proof is in NP, and if P=NP (with an acceptably small polynomial), there's a fast polynomial algorithm for automatic theorem proving. Assuming P!=NP, how come human mathematicians succeed in proving such intricate theorems, without resorting to using brute-force to scan all possible strings that are hypothetical proofs? Thanks, 77.127.95.225 (talk) 06:46, 14 September 2017 (UTC)
 * The literature is sometimes confusing on this point, but P and NP apply to classes of problems rather than particular problems, so individual problems in the class may be easy even if the class as a whole is difficult. The difficulty of a class generally refers to the worst cases, so even if a particular class is considered NP-hard, it may be that a randomly selected instance is solvable in polynomial time in the majority of cases. That being said, there are many different levels of complexity other than P and NP, and NP describes the complexity of solving problems in Boolean logic (see Boolean satisfiability problem), which is a very small part of mathematics as a whole. The class of problems which consist of proving general mathematical theorems is undecidable (see Undecidable problem), which means there are instances that can't be solved under any reasonable model of computation. In contrast, problems in an NP class are solvable, even if it's impractical to solve some of them because it takes such a long time. Just as in the NP case though, just because a class as a whole is undecidable doesn't mean that all instances must be difficult, and perhaps most instances will be easily solvable even if there are the occasional instances which are impossible to solve. This explains what happens in mathematics; some statements are easy to prove or disprove, while some are difficult/impossible. Mathematics is full of unsolved problems, most of which can be formulated as proving a theorem, but that doesn't mean some instances can't be solved easily. --RDBury (talk) 18:32, 14 September 2017 (UTC)
 * PS. Automated theorem provers use a variety of heuristics to prove (or disprove) statements. But, as predicted by undecidability, they don't always produce a result. Most, such as Coq, are interactive so that humans can guide the process using experience and intuition while the software takes care of gritty details. --RDBury (talk) 19:03, 14 September 2017 (UTC)


 * I interpret the OP's question, as per his last sentence, as "What is the nature of human intuition that allows people to prove theorems even when the universe of possible approaches to try is huge or infinite?" I think it's a fascinating question, and I'm sure I've read discussions of it that give some insights but not a whole lot. (Sometimes people ask chess experts a similar question.) Surprisingly, our article Intuition doesn't seem to help, nor does the article Artificial intuition. The article Cognition, or wikilinks therein, may be of some help. Or maybe the bibliographies of any of the articles I've linked might lead you somewhere useful. Loraof (talk) 21:20, 14 September 2017 (UTC)


 * To sort of answer the question in the last line, I don't think any computation is involved. The teacher I had for the NP stuff said that when it came out, he wondered how it was possible to do it. But basically first the 3-SAT problem was shown to be NP-complete (it is in NP and not known to be in P). Then other problems can be shown to basically be equivalent to 3-SAT, or another problem in the NP-complete class, and that proves them to be NP-complete. Bubba73 You talkin' to me? 22:30, 14 September 2017 (UTC)


 * Another relevant link here might be Problem solving, especially the sections on strategies and methods. The thing with 'interesting' questions is that they are difficult, if not impossible, to answer, at least for current science/philosophy/metaphysics. It should be noted that 'intuition' in the context of solving a mathematical problem has a somewhat different meaning than its usual one. (Plus there is intuitionistic logic, which has yet another meaning.) The article defines intuition as a source of knowledge, but in mathematics this has proven to be notoriously unreliable. Nevertheless, a mathematician can often look at a theorem statement and come up with the outline of a proof without the use of the problem solving techniques normally taught; turning the outline into an actual proof is often still an issue though. You might then say that the outline comes from intuition, for lack of a better term. In this context, intuition has the rather Zen-like quality that it can be learned but it's impossible to teach, since otherwise it would be a method or strategy. Perhaps the brain is really executing some internal and poorly understood algorithm, or perhaps there is some ineffable quality of the brain which can't be emulated on a computer. My personal point of view is that until you can put either alternative into a falsifiable form it's not something you can decide. I'm pretty sure the first class of problems to be shown NP-complete was SAT, aka the Boolean satisfiability problem; see Cook–Levin theorem. Proving NP is usually relatively easy: basically just show that if you have a potential solution you can verify it quickly. The proof of the C-L theorem involves emulating a Turing machine with a set of logical statements and is fairly complex. --RDBury (talk) 08:46, 15 September 2017 (UTC)
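The asymmetry RDBury describes — verifying a candidate solution is fast, while finding one may require exponential search — can be made concrete with a toy SAT instance. A minimal sketch (the formula is an invented example, literals written DIMACS-style as signed integers):

```python
from itertools import product

# CNF formula as a list of clauses: a positive int k means variable k,
# a negative int -k means its negation (DIMACS-style literals).
formula = [[1, -2], [-1, 3], [2, 3]]

def verify(formula, assignment):
    # Verification is linear in the formula size:
    # every clause must contain at least one true literal.
    return all(
        any(assignment[abs(lit)] == (lit > 0) for lit in clause)
        for clause in formula
    )

def brute_force(formula, n):
    # Finding a solution naively tries up to 2**n assignments.
    for bits in product([False, True], repeat=n):
        assignment = {i + 1: b for i, b in enumerate(bits)}
        if verify(formula, assignment):
            return assignment
    return None

sol = brute_force(formula, 3)
print(sol, verify(formula, sol))
```

The quick `verify` check is what puts SAT in NP; nothing better than exponential worst-case search is known for finding `sol`, which is the P vs NP question.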

Name of a particular type of "temporal" averaging function?
I'm thinking of a function that works in the following way: an infinite list of variables is processed sequentially, and at each step the new "temporal average" (TA) is the sum of the current TA and the next value, divided by two. So for example, once the values 9976, 3008, 4582, 1733, 576 have been fed into the function we have a TA of 2105.5 (whereas the classical average would have been 3975). Obviously, had the values been processed in reverse the TA would compute to 6457.0625, hence the so-called "temporal" aspect of this function. Now for some reason this seems to me to be a useful metric of sorts, though admittedly I can't really formulate why I think that is at the moment. So my question is whether there is in fact a name for this kind of function and also if there are any interesting (or more useful) variants of it? 73.232.241.1 (talk) 07:25, 14 September 2017 (UTC)


 * You are describing exponential smoothing with a smoothing factor of 1/2. Dragons flight (talk) 07:34, 14 September 2017 (UTC)


 * Excellent, thank you! 73.232.241.1 (talk) 07:48, 14 September 2017 (UTC)


 * I'm not getting your result. Can you provide a worked example similar to the one here?
 * If current value starts at zero, an input of 9976 results in (0+9976)/2 = 4988
 * Then (4988+3008)/2 = 3998
 * Then (3998+4582)/2 = 4290
 * Then (4290+1733)/2 = 3011.5
 * Lastly (3011.5+576)/2 = 1793.75 which isn't your 2105.5 . Puzzled -- SGBailey (talk) 13:25, 14 September 2017 (UTC)


 * Ah found it - you don't start with a "pot" of 0, you are initializing your "pot" to the first value and starting with the second value. -- SGBailey (talk) 13:27, 14 September 2017 (UTC)
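For concreteness, both computations above are the exponential-smoothing recurrence s ← αx + (1 − α)s with α = 1/2; only the seed differs. A small sketch reproducing the numbers in the thread:

```python
def smooth(values, alpha=0.5, init=None):
    # Exponential smoothing: s <- alpha*x + (1 - alpha)*s.
    # If init is None, seed with the first value and smooth the rest
    # (the OP's convention); otherwise start from the given seed.
    it = iter(values)
    s = next(it) if init is None else init
    for x in it:
        s = alpha * x + (1 - alpha) * s
    return s

data = [9976, 3008, 4582, 1733, 576]
print(smooth(data))          # 2105.5   (the OP's "temporal average")
print(smooth(data[::-1]))    # 6457.0625 (same data in reverse order)
print(smooth(data, init=0))  # 1793.75  (SGBailey's run seeded with 0)
```

The recency weighting is visible in the reversed run: each value's weight halves with every later step, so whichever values arrive last dominate.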


 * That set of data would probably be better described using double exponential smoothing, taking account of the trend. You might be interested in Smoothing; methods like the Kalman filter or the Savitzky–Golay smoothing filter are also quite often used in practice, and you don't have to pick smoothing factors out of the air. Dmcq (talk) 13:47, 14 September 2017 (UTC)
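For flavor, a scalar Kalman filter with a random-walk state model fits in a few lines. A minimal sketch (the process variance q and measurement variance r below are invented for illustration; in practice they come from the noise statistics rather than being picked by hand):

```python
def kalman_1d(measurements, q=1.0, r=10.0):
    # Scalar Kalman filter with a random-walk state model.
    # q: process noise variance (how fast the true value drifts)
    # r: measurement noise variance (how noisy each reading is)
    x, p = measurements[0], r    # seed state with the first measurement
    out = [x]
    for z in measurements[1:]:
        p = p + q                # predict: uncertainty grows by q
        k = p / (p + r)          # Kalman gain: trust in the new reading
        x = x + k * (z - x)      # update: blend prediction and reading
        p = (1 - k) * p          # update: uncertainty shrinks
        out.append(x)
    return out
```

Unlike a fixed smoothing factor, the effective weight k adapts to the stated noise levels: large q/r tracks the data closely, small q/r smooths heavily.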


 * Okay, yeah the Kalman filter may be a better fit for the data I'm working with actually. I appreciate the suggestions. Cheers. 73.232.241.1 (talk) 19:10, 14 September 2017 (UTC)