Wikipedia:Reference desk/Archives/Mathematics/2011 February 11

= February 11 =

"One to one" and "onto"
What is meant by the terms "one to one" and "onto" with regard to linear transformations? Widener (talk) 03:44, 11 February 2011 (UTC)
 * See injection and surjection (actually I won't be surprised if those redirect to a single article). There is no special meaning as they relate to linear transformations; it's the same meaning as for functions in general.
 * That's not to say that there aren't some implications that follow from injectivity/surjectivity in the context of linear transformations and not in other contexts &mdash; there certainly are those. For example, an injective linear transformation has trivial kernel; the matrix of a surjective linear transformation between finite-dimensional vector spaces has full rank. --Trovatore (talk) 03:52, 11 February 2011 (UTC)
 * Ah, the article for injections is actually at injective function. Apparently the word injection has some other meanings, none of them probably very important :-/ --Trovatore (talk) 03:53, 11 February 2011 (UTC)
 * I assume by "full rank" you mean all the columns are linearly independent. What do you mean by "trivial" kernel? Widener (talk) 04:01, 11 February 2011 (UTC)
 * Well, the rows or the columns, whichever there's fewer of. The kernel of a linear transformation is the set of all vectors that get sent to the zero vector, and they form a subspace of the domain.  "Trivial" means that that subspace consists only of the zero vector. --Trovatore (talk) 04:25, 11 February 2011 (UTC)
 * OK Thanks! Widener (talk) 04:31, 11 February 2011 (UTC)
 * If the map is surjective, needn't there be at least as many columns as rows? So then the condition becomes that the rows are linearly independent.  Eric.  82.139.80.236 (talk) 15:22, 16 February 2011 (UTC)
 * You're probably right (well, more than "probably"; I just can't be bothered to check at the moment). But that's not what "full rank" means in general. --Trovatore (talk) 07:41, 18 February 2011 (UTC)
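The rank criteria discussed above can be checked numerically. A minimal NumPy sketch (the matrices here are illustrative examples of my own, not taken from the discussion):

```python
import numpy as np

def is_injective(A):
    # Injective (trivial kernel) iff the columns are linearly
    # independent, i.e. rank equals the number of columns.
    return np.linalg.matrix_rank(A) == A.shape[1]

def is_surjective(A):
    # Surjective iff the rows span the codomain,
    # i.e. rank equals the number of rows.
    return np.linalg.matrix_rank(A) == A.shape[0]

A = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])  # 3x2, rank 2: injective, not surjective
B = A.T                     # 2x3, rank 2: surjective, not injective
```

For the 3×2 matrix A, the rank (2) equals the number of columns but not the number of rows, so the corresponding map is injective but not surjective; its transpose is the reverse.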

Solving limits
$$ \lim_{h \to 0}\frac{f(h+1)}{h} = 5 $$

Show that $$f(1)=0$$. How do you do this? --Widener (talk) 04:57, 11 February 2011 (UTC)
 * This looks like homework. Look at the top of the page for our procedures on that.  In a nutshell, you need to show what progress you've made towards a solution. --Trovatore (talk) 05:01, 11 February 2011 (UTC)
 * It's a past exam question actually, and furthermore, it's a past exam for a course I haven't started yet. Perhaps you could give me a reference to a page that teaches you how to solve limits like these in general terms. Widener (talk) 05:05, 11 February 2011 (UTC)
 * Oh and by the way, I don't have a clue how to solve it. Widener (talk) 05:08, 11 February 2011 (UTC)
 * One hint I can give you is that the claim is not true as stated &mdash; you need a further condition on $$f\,\!$$. Can you figure out or guess what condition might be needed?  If you can, what's the definition of that condition, and how might you apply it? --Trovatore (talk) 05:09, 11 February 2011 (UTC)
 * The question does mention that it is differentiable at x=1. Oops. Anyway, $$f'(1) = \lim_{h \to 0}\frac{f(h+1)-f(1)}{h}=\lim_{h \to 0} 5-\frac{f(1)}{h}$$ and let me guess, $$\lim_{h \to 0}5-\frac{f(1)}{h}$$ is finite only when $$f(1)=0$$. Something like that? Widener (talk) 05:21, 11 February 2011 (UTC)
 * That's the general idea, though I think you need to work on the details a bit to make everything justified. Also, that f is differentiable at 1 is more than we need - it would follow if we just knew that it is continuous there.
 * By the way, you're not supposed to remove your questions. -- Meni Rosenfeld (talk) 05:54, 11 February 2011 (UTC)
 * OK Thanks Trovatore and Meni. Widener (talk) 06:06, 11 February 2011 (UTC)
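For the record, the argument sketched above can be written out in full, assuming only that $$f$$ is continuous at 1: since $$\lim_{h \to 0} h = 0$$ and $$\lim_{h \to 0}\frac{f(h+1)}{h} = 5$$, the product law for limits gives $$\lim_{h \to 0} f(h+1) = \lim_{h \to 0}\left(h \cdot \frac{f(h+1)}{h}\right) = 0 \cdot 5 = 0,$$ and continuity of $$f$$ at 1 then yields $$f(1) = \lim_{h \to 0} f(h+1) = 0$$.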

"Solving" is the wrong word here. You're solving a problem, and you're evalutating a limit. Michael Hardy (talk) 04:42, 12 February 2011 (UTC)

Ballistics
I am looking for some literature related to the math behind the ballistics of guns:equations of motions of bullets etc. Can someone suggest some good online or offline resources? Thanks-Shahab (talk) 05:24, 11 February 2011 (UTC)

 * An interesting question on this topic was asked in September 2010 by User:Anonymous Dissident. My response proved to be unreliable, but the response from User:Bo Jacoby was very good.  It will be of assistance to you.  See the question titled Projectile motion problem HERE.  Dolphin  ( t ) 05:34, 11 February 2011 (UTC)
 * Trajectory might be a good place to start.--Salix (talk): 10:37, 11 February 2011 (UTC)
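As a first reference point before those resources: the standard drag-free equations of motion for a projectile launched at speed $$v_0$$ and angle $$\theta$$ are $$x(t) = v_0 t \cos\theta, \qquad y(t) = v_0 t \sin\theta - \tfrac{1}{2} g t^2,$$ giving a level-ground range of $$R = \frac{v_0^2 \sin 2\theta}{g}$$. For actual bullets, air resistance dominates, so these are only a starting point; the Trajectory article and the linked thread cover drag corrections.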

Algorithm for Time Complexity?
Is there an algorithm which, given another algorithm, will work out its time complexity (or its space complexity)? If not, is this just because no one's thought of one, or are there theoretical reasons this can't happen? — Preceding unsigned comment added by SlakaJ (talk • contribs) 17:39, 11 February 2011 (UTC)
 * As we can't demonstrate whether another algorithm will halt or not, we certainly can't determine its complexity (in all cases). --Tardis (talk) 18:01, 11 February 2011 (UTC)
 * To play the devil's advocate: what if we admit a halting oracle (or even an oracle for some time complexity class)? Sławomir Biały (talk) 18:17, 11 February 2011 (UTC)
 * You are making a big assumption, so why not make others... Assume that any possible input function will halt. Assume that time complexity will be in the form a·lg(n)^b·n^c. For a and b, you have 0 or 1 to include/exclude log or power. Then, this is not complicated. Apply the input function to a data set of size 1, then 10, then 100, then 1000, then 10000... until you can do a best fit for values of a, b, and c. It is a rather simplified form of a best fit algorithm for random (x,y) values where we have the size of input as x and the time required to run as y. -- kainaw™ 19:38, 11 February 2011 (UTC)
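The curve-fitting step described here can be sketched in a few lines of Python. This sketch simplifies the model to a pure power law $$c \cdot n^b$$ (dropping the lg(n) factor) and uses synthetic timings in place of real measurements, so it only illustrates the fitting idea, not a full complexity estimator:

```python
import numpy as np

def fit_power_law(ns, times):
    """Least-squares fit of times ~ c * n**b on a log-log scale.
    Returns (c, b). A log factor, if present in the true running
    time, will leak into b slightly, as in the fits quoted below."""
    logn = np.log(np.asarray(ns, dtype=float))
    logt = np.log(np.asarray(times, dtype=float))
    b, logc = np.polyfit(logn, logt, 1)  # slope = exponent, intercept = log c
    return float(np.exp(logc)), float(b)

# Synthetic check: exact quadratic "running times" recover b = 2, c = 1.
ns = [10, 100, 1000, 10000]
c, b = fit_power_law(ns, [n ** 2 for n in ns])
```

In practice one would replace the synthetic list with wall-clock measurements of the target algorithm at each size, averaged over several runs per n.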
 * Well, this suggestion fails even for sorting algorithms (e.g., the input data might already be sorted). What I had in mind, maybe, was this: supposing that an algorithm is known a priori to run in polynomial time, is there a decision procedure to determine its time complexity? My feeling is that the answer is still "no", and that there should be a theorem to that effect. Sławomir Biały (talk) 19:51, 11 February 2011 (UTC)
 * By clarifying the data, you are altering the definition of time complexity. Time complexity is independent of the data. It is based on the size of the data. The n used in the definition is a size, not a type. You don't have a "sorted n" and an "unsorted n". You simply have n. If you don't want to test the algorithm on actual data, you can perform a step analysis. Unroll all of the loops (even recursive calls can be considered loops) into a non-looping algorithm. You will be left with the number of steps required to complete the algorithm. Once again, you must do this with repeated values for n because some loops depend on the size of n to determine how many times the loop is processed. If you then assume each step is processed in a fixed amount of time, you can estimate the amount of time required for the algorithm. That is, pretty much, how humans do it. You eyeball the loops and guess at a time complexity. A loop by itself is n. A loop in a loop is n². -- kainaw™ 19:58, 11 February 2011 (UTC)
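The loop-counting rule of thumb at the end can be made concrete with a toy Python sketch (the function is mine, for illustration only):

```python
def count_steps(n):
    """Explicit step count for a doubly nested loop over n items:
    the inner body executes exactly n * n times."""
    steps = 0
    for i in range(n):
        for j in range(n):
            steps += 1  # one "step" of the inner loop body
    return steps
```

Evaluating this at several sizes (100 steps for n = 10, 10000 for n = 100) recovers the n² growth that the eyeball analysis predicts.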
 * Please see time complexity. The time complexity of an algorithm is usually taken to mean its worst-case behavior on an input of a given size. Thus smoothsort has time complexity O(n log n) even though it runs as O(n) on sorted data. So if, in the "solution" you proposed above, I were unlucky enough to give sorted data to the algorithm for various values of n, then I would (incorrectly) reckon a time complexity of O(n). The definition of time complexity requires knowing the behavior of the algorithm for all possible inputs of a given size. Sławomir Biały (talk) 20:13, 11 February 2011 (UTC)
 * I've read the article and I know what smoothsort is. When using the solution I suggested on smoothsort, I set tests-per-n to 100 to avoid odd cases of sorted data. I set initial n at 100 and incremented to 1000000. After plotting average time per n, the best fit was 0.92·lg(n)·n^1.03. Rounded to the nearest integers, that is lg(n)·n, or n log n. So, your theory is that this method will fail miserably. When implemented, it gets a rather accurate answer. Further, I can do it on any sorting algorithm. Bubble sort returns 0.02·lg(n)·n^1.93. That is pretty much n² if I'm not mistaken. I suppose that is, in theory, completely off also. -- kainaw™ 16:59, 12 February 2011 (UTC)
 * What you get by fitting to experimental run times is (at best?) the average behavior. If you try it with quicksort you will very likely get an n log n-like curve, too -- but the actual worst-case behavior of quicksort is O(n²). –Henning Makholm (talk) 17:49, 12 February 2011 (UTC)
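This point can be illustrated concretely. The following sketch (mine, not from the discussion) is a naive first-element-pivot quicksort instrumented to count comparisons: on already-sorted input it makes exactly n(n−1)/2 comparisons, the O(n²) worst case that fitting average run times would miss:

```python
def quicksort_comparisons(a):
    """Naive first-element-pivot quicksort; returns the number of
    element-vs-pivot comparisons made while sorting a copy of a."""
    comps = 0

    def qs(xs):
        nonlocal comps
        if len(xs) <= 1:
            return xs
        pivot, rest = xs[0], xs[1:]
        comps += len(rest)  # each remaining element is compared to the pivot
        left = [x for x in rest if x < pivot]
        right = [x for x in rest if x >= pivot]
        return qs(left) + [pivot] + qs(right)

    qs(list(a))
    return comps
```

On sorted input of length 100 this performs 99 + 98 + ... + 1 = 4950 comparisons, while a shuffled input of the same length typically needs far fewer, so an empirical fit over random inputs sees only the n log n average.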
 * The result for quicksort is up for interpretation. It is 0.63·lg(n)·n^1.37. That rounds off to n log n, but has a rather large error when compared to other algorithms. My point is that this isn't thrown off because one time the computer generated a strange set of data (as the questioner complained about). Further, the questioner is not interested in any way in the worst or best case. The user wants the average time complexity "for all possible inputs of a given size". -- kainaw™ 17:57, 12 February 2011 (UTC)
 * Out of curiosity, who exactly was asking about empirical average behavior? It certainly wasn't me, since I've corrected you several times on this point.  Sławomir Biały  (talk) 18:30, 12 February 2011 (UTC)
 * No, there isn't. Suppose $$f$$ were a computable function that, on an input that runs in polynomial time, outputs a $$c$$ such that the input runs in $$O(n^c)$$ time (we make no requirements on $$f$$ when the input doesn't run in polynomial time).  We're going to build a diagonal function $$D$$ which knows its own index (via the recursion theorem).  On any input of length $$n$$, $$D$$ simulates $$n$$ steps of $$f(D)$$.  If this does not produce an output within those steps, $$D$$ halts.  If this does produce an output, $$D$$ runs for another $$n^{f(D) + 1}$$ many steps, then halts.
 * If $$f(D)$$ never halts, then $$D$$ runs in linear time. Contradiction.  If $$f(D) = c$$, then $$D$$ runs in $$O(n^{c+1})$$ time.  Contradiction.  Thus there can be no such $$f$$.--203.97.79.114 (talk) 21:13, 11 February 2011 (UTC)
 * Re the question about oracles: "Algorithm A has complexity O(f(n))" means "There exists c such that for all inputs t, Runtime(A,t) <= c*(1+f(|t|))". That is a $$\Sigma^0_2$$ formula, one level higher in the arithmetic hierarchy than the halting problem, if I have that right. That is, for fixed c, you can tell with a halting oracle whether the running time of an algorithm is bounded by c*f(n), but to find the right c, you have to try c=1, c=2, and so forth, which is only semi-decidable for the oracle machine. In other words you have to run your oracle machine in a potentially infinite loop and ask whether that halts. The "Turing machine with an oracle for the halting problem" (aka Turing machine with a $$\Pi^0_1$$ oracle) has its own halting problem, which needs a level-2 ($$0^{(2)}$$ or $$\Pi^0_2$$) oracle, and so on. I think the answer to Sławomir's question is that you need a $$\Pi^0_2$$ oracle (and this all assumes f is a computable function, otherwise you have to go higher up in the arithmetic hierarchy or even beyond it). 71.141.88.54 (talk) 23:31, 11 February 2011 (UTC)
 * But you can't output an arbitrary function by an algorithm as it's an infinite object. The only meaningful way how to "work out" the time complexity of an algorithm I can think of in this context is that the procedure gets a description of an algorithm A and a number n, and is supposed to output either the maximum running time of A on any input of length n, or ∞ if A does not halt on some of these inputs. This can indeed be done with an oracle for the halting problem: ask the oracle whether A halts on all inputs of length n, and if it does, simulate A.—Emil J. 14:47, 16 February 2011 (UTC)
 * I didn't think the question was to determine complexity. Rather, given an algorithm A described by a Turing machine, and a computable function f described by another Turing machine, the question was whether an oracle machine could give a yes-no answer to the decision problem of whether algorithm A on input of size n always runs in time O(f(n)).  My answer was that a mere halting ($$\Pi_1^0$$) oracle can't solve that problem, but that a "super" ($$\Pi_2^0$$) halting oracle can solve it.  Does that look reasonable?  71.141.88.54 (talk) 05:07, 17 February 2011 (UTC)