Wikipedia:Reference desk/Archives/Mathematics/2011 March 9

= March 9 =

Identifying "off" values from how well data fit a reference
I'm dealing with sets of (real-valued) data points, and associated with each data point is a reference value. Aside from the existence of the reference values, the data aren't expected to follow any given distribution, or necessarily match the variance or mean of the reference values. (Theoretically there should be a simple transformation function from the data values to the reference values, but we can't specify a priori what the function should be - the only constraint that we have at the moment is that the transformation function would be monotonic and non-decreasing.) As I understand it, the way to handle such a situation is with non-parametric statistics, and the way to tell "how well" the data match the reference values (rank-order-wise) is probably by using the Kendall tau rank correlation coefficient (though I'm still a little unclear on the difference between it and the Kendall tau distance or Kendall's W, and where one is used in preference to the others). From what I can tell, given multiple data sets all sharing the same reference values, I can use this statistic to tell which data set fits the reference set "better", and even "how much better", although I'm not sure if it would be the best tool for that job.

One unresolved issue is that I wish to assign an "outlierness" value to each data point, so I can identify which data points are the most "off" (and in which direction) from where they "should be" according to the reference values. If possible, I want to weight the "offness" by how closely spaced the points are in the reference set - that is, swapping the rank-order of two adjacent points in regions where the reference values are spaced closer together would be considered less "off" than swapping two adjacent points in regions where the reference values are further apart - though I'm not sure how critical that would be in practice. (Actually, the reason I want such a value is that I want to test whether the "offness" correlates with other data values associated with the points - am I thinking about this in the wrong way?)

I appreciate any guidance as to tests/statistics/protocols I should be looking into. (Does what I said even approximate a standard statistical procedure?) Thanks. -- 174.31.215.221 (talk) 05:51, 9 March 2011 (UTC)
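For what it's worth, Kendall's tau is simple enough to compute directly. Below is a minimal pure-Python sketch (this is the tie-free tau-a variant; a library routine such as SciPy's `kendalltau` handles ties properly). The `offness` function is my own ad-hoc illustration of a per-point score - signed rank displacement - not a standard named statistic:

```python
from itertools import combinations

def kendall_tau(data, reference):
    """Kendall tau-a: (concordant - discordant pairs) / total pairs.

    Assumes no ties; ranges from -1 (reversed order) to +1 (same order).
    """
    n = len(data)
    concordant = discordant = 0
    for i, j in combinations(range(n), 2):
        s = (data[i] - data[j]) * (reference[i] - reference[j])
        if s > 0:
            concordant += 1
        elif s < 0:
            discordant += 1
    return (concordant - discordant) / (n * (n - 1) / 2)

def offness(data, reference):
    """Signed rank displacement of each point: its rank within the data
    minus its rank within the reference (one crude 'outlierness' score)."""
    rank_d = {i: r for r, i in enumerate(sorted(range(len(data)), key=lambda i: data[i]))}
    rank_r = {i: r for r, i in enumerate(sorted(range(len(reference)), key=lambda i: reference[i]))}
    return [rank_d[i] - rank_r[i] for i in range(len(data))]
```

A weighted variant of `offness` (using gaps between consecutive reference values as weights) would capture the "closely spaced points matter less" idea, but the unweighted version may be enough to test for correlation with other variables.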

(x+1)^x
I was thinking after my maths exam, if there was a way to expand (x+1) to the power of x. It could be as complicated as possible, but is there a way to take away the brackets? Thanks in advance 27.32.104.185 (talk) 07:01, 9 March 2011 (UTC)
 * Other than using the binomial series, I don't think there's much you can do. -- Meni Rosenfeld (talk) 09:20, 9 March 2011 (UTC)
 * You can do this: let y = (x+1)^x, then take the natural log on both sides to get ln(y) = x ln(x+1). This can be useful depending on what you want to do with it, e.g. you can now take the derivative implicitly. Staecker (talk) 11:43, 9 March 2011 (UTC)
 * Or you can go straight for a Taylor series expansion round 0. Dmcq (talk) 11:47, 9 March 2011 (UTC)
 * WolframAlpha gives the following series expansions for (x+1)^x. Zunaid 13:48, 9 March 2011 (UTC)
 * I didn't understand that WolframAlpha page. (x+1)^x has a single variable. I can readily evaluate it for (almost) any given value of x. Why are they giving different Taylor series expansions at different values of x? Surely what you want is a series expansion for the expression (x+1)^x. Puzzled. -- SGBailey (talk) 06:03, 12 March 2011 (UTC)
 * Have you read our Power series article? The series expansion can be centered around different values of x, namely the c in
 * $$f(x) = \sum_{n=0}^\infty a_n \left( x-c \right)^n = a_0 + a_1 (x-c)^1 + a_2 (x-c)^2 + a_3 (x-c)^3 + \cdots$$
 * You might want to view those as 0-based power series for $$(x+c+1)^{x+c}$$, but it is convenient to view them as different series for the same function. Generally the different series converge for different sets of x's, which means they are useful for different purposes. See also analytic continuation.
 * (Only one of the series presented by Wolfram Alpha is actually a power series; it seems not to understand what specifically Taylor/power series are used for. But one could find a Taylor series around every point in the interior of the function's domain). –Henning Makholm (talk) 16:32, 12 March 2011 (UTC)
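To make the series expansion concrete: since $$(x+1)^x = e^{x\ln(1+x)}$$, composing the standard series for $$\ln(1+x)$$ and $$e^u$$ gives, around 0, $$(x+1)^x = 1 + x^2 - \tfrac{x^3}{2} + \tfrac{5x^4}{6} + O(x^5)$$. A quick numerical sanity check of those first few terms (a sketch, not a proof):

```python
import math

def truncated_series(x):
    # First terms of the expansion of (x+1)^x around 0, obtained by
    # composing exp(u) with u = x*ln(1+x).
    return 1 + x**2 - x**3 / 2 + 5 * x**4 / 6

x = 0.1
exact = (1 + x) ** x  # equivalently math.exp(x * math.log1p(x))
print(exact, truncated_series(x))  # close for small x; error is O(x^5)
```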

Compact surface & Gaussian curvature
If a surface is compact, does it necessarily have a point with positive Gaussian curvature? How would one show/argue this, if so? I'm trying to show there are no compact minimal surfaces in R^3. Thankyou, Mathmos6 (talk) 07:12, 9 March 2011 (UTC)
 * Any genus $$g$$ (compact, orientable) surface, with $$g \geqslant 2$$, has a metric of constant negative curvature (in the same way as there is a metric on the torus of curvature constantly $$0$$). On the other hand, you won't be able to realise these metrics by an isometry (of Riemannian manifolds) which embeds it into $$\mathbb{R}^3$$: a theorem of Hilbert says that no (compact, orientable) surface with constant negative curvature can be isometrically embedded in $$\mathbb{R}^3$$. You might want to take a look at the relevant Wikipedia article. --SamTalk 08:27, 9 March 2011 (UTC)


 * You don't attend Cambridge, do you? The students were given the same problem in this week's homework: see Question 5 — Fly by Night (talk) 18:43, 9 March 2011 (UTC)

Assuming per above that it is a homework exercise to show that there is no compact minimal surface, I will offer a hint: use harmonic functions. Sławomir Biały (talk) 02:46, 10 March 2011 (UTC)
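For the original curvature question, one standard argument (just a sketch; the harmonic-function hint is another route): let $$f(p) = |p|^2$$ on the compact surface $$S \subset \mathbb{R}^3$$. By compactness $$f$$ attains its maximum at some point $$p_0$$; set $$R = |p_0|$$. Then $$S$$ lies inside the closed ball of radius $$R$$ and touches the sphere $$|p| = R$$ at $$p_0$$, so comparing normal curvatures with those of the sphere at $$p_0$$ gives $$K(p_0) \geq 1/R^2 > 0$$. But a minimal surface has $$H = \tfrac{1}{2}(k_1 + k_2) = 0$$, so $$k_2 = -k_1$$ and $$K = k_1 k_2 = -k_1^2 \leq 0$$ everywhere, contradicting $$K(p_0) > 0$$; hence there is no compact minimal surface in $$\mathbb{R}^3$$.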

Rounding off
Rounding off is not a problem for me at all, but what does "4840300 correct to the nearest 1000" mean? Seems a bit weird (the hundreds digit is significant!) Lanthanum-138 (talk) 13:03, 9 March 2011 (UTC)
 * It sounds weird to me too. I would read this as indicating something like "4840300 ± 1000", i.e. the actual value is between 4839300 and 4841300. Or perhaps "4840300 ± 500", to make the window of approximation equal to 1000. Staecker (talk) 13:45, 9 March 2011 (UTC)
 * OK, thanks for the explanation. (Although I still wonder why people say things like that in the first place.) Lanthanum-138 (talk) 13:58, 9 March 2011 (UTC)


 * (ec) Seems absurd to me. The phrase 'correct to the nearest X' means (if I get it correctly) 'rounded to the nearest natural (or integer) multiple of X'; for example 'the table is 80cm high correct to the nearest 5cm' means 'the integer multiple of 5cm closest to the table's height is 80cm (16 × 5cm)'. So ANY number 'correct to the nearest 1000' should be a multiple of 1000, while 4840300 is not. It is, however, a multiple of 998... --CiaPan (talk) 19:03, 9 March 2011 (UTC)


 * I would take it as a straightforward invitation to round to the nearest whole thousand, i.e. to 4840000. 86.132.164.148 (talk) 18:50, 9 March 2011 (UTC)
 * I agree, 4840000 seems to be what is being looked for.
 * 137.81.47.207 (talk) 19:01, 9 March 2011 (UTC)
 * From the context of the number - which is a population - I'm not sure about that. Lanthanum-138 (talk) 08:33, 10 March 2011 (UTC)
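Reading the phrase as a plain instruction to round to the nearest thousand, Python's built-in `round` with a negative digit count does exactly that (a trivial check):

```python
# A negative ndigits argument rounds to the left of the decimal point,
# i.e. to the nearest multiple of 10**(-ndigits) = 1000 here.
print(round(4840300, -3))  # 4840000
```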

Which number is larger?
I am trying to find out which number is larger:

9! ^ (9! ^ 9!)

OR

(9! choose 8!) ^ 9!

The best way I've had suggested is to subtract the two, that is:

[9! ^ (9! ^ 9!)] - [(9! choose 8!) ^ 9!]

This way, if it's negative the "choose" expression is larger, but if it's positive the exponential expression is larger. Obviously, a TI calculator won't handle numbers this large... so I tried something called "apFloat", a Java-based arbitrary-precision calculator. Computing either expression on its own is possible, but the subtraction of the two takes up too much memory and it gives an error that it cannot open a temporary file of that size or something.

Is there some other way to determine which number is larger? Thanks!

137.81.47.207 (talk) 18:59, 9 March 2011 (UTC)

P.S.: If anyone knows how, please take the expressions I wrote here and write them in wiki LaTeX? I would do it but I don't know how to use it yet. Thanks much!
 * Not very helpful, but Wolfram Alpha can compute the difference. Based on the answer, the first thing is bigger. Staecker (talk) 19:30, 9 March 2011 (UTC)


 * $$\binom{9!}{8!}^{9!} = \left(\frac{(9!)!}{\mathrm{something}>1}\right)^{9!} < \left((9!)!\right)^{9!} < \left(9!^{9!}\right)^{9!} = 9!^{9!\times 9!} = 9!^{9!^2} < 9!^{9!^{9!}}$$, using $$n! < n^n$$ in the middle and, at the end, $$2<9!$$. –Henning Makholm (talk) 20:33, 9 March 2011 (UTC)


 * (ec) Let's expand both expressions somewhat... $$(9!)^{(9!)^{9!}} = (9!)^{(9!)^{9!-1}\cdot(9!)} = \left( (9!)^{(9!)^{9!-1}} \right)^{9!}$$ and $${9! \choose 8!}^{9!} = \left(\frac {9!}{8!\cdot (9!-8!)}\right)^{9!} = \left(\frac 9{9!-8!}\right)^{9!} = \left(\frac 9{9\cdot 8!-8!}\right)^{9!} = \left(\frac 9{8\cdot 8!}\right)^{9!}$$ We have two powers with positive bases and same positive exponent, so they are in same less-than/greater-than relation as their bases,$$(9!)^{(9!)^{9!-1}}$$ and $$\frac 9{8\cdot 8!}$$ respectively... --CiaPan (talk) 20:34, 9 March 2011 (UTC)


 * CiaPan, $${9! \choose 8!}^{9!} = \left(\frac {(9!)!}{(8!)!\cdot (9!-8!)!}\right)^{9!}$$, and not $$\left(\frac {9!}{8!\cdot (9!-8!)}\right)^{9!}$$. Henning has got it right. - DSachan (talk) 21:18, 9 March 2011 (UTC)


 * Right, I can't guess where my head was when I wrote that... [[file:sad.png]] Sorry for that. --CiaPan (talk) 09:49, 14 March 2011 (UTC)


 * Computing in another way, $$\binom{9!}{8!} < 9!^{8!} < 9!^{9!}$$ but then obviously $$\bigl(9!^{9!}\bigr)^{9!} < 9!^{9!^{9!}}$$ by a very wide margin, so indeed $$\binom{9!}{8!}^{9!} < 9!^{9!^{9!}}$$. (Update: forgot to sign.) – b_jonas 08:53, 10 March 2011
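The comparison can also be settled numerically without ever materialising the giant numbers: compare $$\log\log$$ of each expression, using the log-gamma function for the binomial coefficient (a sketch in Python's standard library; no arbitrary-precision package needed):

```python
import math

fact9 = math.factorial(9)  # 9! = 362880
fact8 = math.factorial(8)  # 8! = 40320

# log C(9!, 8!) via log-gamma: log n! = lgamma(n + 1)
log_binom = (math.lgamma(fact9 + 1)
             - math.lgamma(fact8 + 1)
             - math.lgamma(fact9 - fact8 + 1))

# Both quantities exceed e, so comparing log(log(.)) is order-preserving:
# log log (9!^(9!^9!))    = 9! * log 9! + log log 9!
loglog_tower = fact9 * math.log(fact9) + math.log(math.log(fact9))
# log log (C(9!,8!)^9!)  = log 9! + log log C(9!,8!)
loglog_binom = math.log(fact9) + math.log(log_binom)

print(loglog_tower > loglog_binom)  # True: the tower 9!^(9!^9!) is larger
```

This agrees with the inequalities above: `loglog_tower` is on the order of millions while `loglog_binom` is a small double-digit number.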