Wikipedia:Reference desk/Archives/Mathematics/2010 April 27

= April 27 =

Squaring the wavefunction #2
Why is it that:

$$|t|^2= \frac{1}{1+\frac{V_0^2\sinh^2(k_1 a)}{4E(V_0-E)}}$$

when:


 * $$t=\frac{4 k_0k_1 e^{-i a(k_0-k_1)}}{(k_0+k_1)^2-e^{2ia k_1}(k_0-k_1)^2}$$

and k1 is imaginary? How do you calculate the absolute value of t anyhow? --99.237.234.104 (talk) 00:50, 25 April 2010 (UTC)


 * EDIT: I didn't realize the two expressions use different variables. The two equations came from the rectangular potential barrier article, and k0 and k1 are defined in terms of E and V0 there.


 * I posted this question before, but didn't fully explain my question. Somebody at the science reference desk suggested I post it again, so here goes.  --99.237.234.104 (talk) 04:04, 27 April 2010 (UTC)

I think the following formula may help:
 * $$|t|^2=t\bar t$$

where $$\bar t$$ is the complex conjugate of t:
 * $$\bar t=\frac{4 k_0(-k_1) e^{+i a(k_0+k_1)}}{(k_0-k_1)^2-e^{2ia k_1}(k_0+k_1)^2}$$

(Igny (talk) 04:15, 27 April 2010 (UTC))


 * Why is that true? Remember that k1 is imaginary, not real.  --99.237.234.104 (talk) 20:19, 27 April 2010 (UTC)
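One way to see that Igny's formula gives the quoted result is to check it numerically. The sketch below is a minimal Python check under assumed unit choices ħ = 1 and m = 1/2 (so that k0 = √E and k1 = √(E − V0), as in the rectangular potential barrier article); E, V0 and a are arbitrary test values, not from the original discussion:

```python
import cmath
import math

# Arbitrary test values with E < V0 (tunneling regime); units chosen
# so that hbar = 1 and m = 1/2, giving k0 = sqrt(E), k1 = sqrt(E - V0).
E, V0, a = 1.0, 2.0, 1.5

k0 = math.sqrt(E)
k1 = cmath.sqrt(E - V0)          # purely imaginary since E < V0
kappa = abs(k1)                  # k1 = i*kappa, with kappa real

# Transmission amplitude from the rectangular potential barrier article.
t = (4 * k0 * k1 * cmath.exp(-1j * a * (k0 - k1))
     / ((k0 + k1) ** 2 - cmath.exp(2j * a * k1) * (k0 - k1) ** 2))

# |t|^2 computed directly: multiply t by its complex conjugate.
T_direct = (t * t.conjugate()).real     # equivalently abs(t)**2

# Closed form, with sinh of the *real* decay constant kappa.
T_closed = 1 / (1 + V0 ** 2 * math.sinh(kappa * a) ** 2 / (4 * E * (V0 - E)))

print(T_direct, T_closed)
```

The point is that $$|t|^2 = t\bar t$$ holds for any complex number; the only subtlety is that conjugating t must also conjugate k1 (sending k1 to −k1), precisely because k1 is imaginary.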

integer solutions
Consider two natural numbers a and b, with a<b. Within the closed interval [b,b²−b] can there exist a solution within the natural numbers to the equation ax+by=bz? If not, why not? Thanks.-Shahab (talk) 05:08, 27 April 2010 (UTC)


 * Assume a=1. Then LHS ≥ 1×b+b×b = b(b+1), RHS ≤ b×(b²−b) = b²×(b−1)
 * For b=3, LHS ≥ 12 and RHS ≤ 18, so the possible solution is:
 * a=1, b=3, [b,b²−b]=[3,6], x=3, y=3, z=4,
 * which gives
 * ax+by = 1×3+3×3 = 12 = 3×4 = bz.
 * CiaPan (talk) 07:28, 27 April 2010 (UTC)


 * I understand you want x, y, z all in the interval [b,b²−b], and that a<b are natural numbers. If b=1 this interval is empty: so, no solution if (a,b)=(0,1). If b=2, the interval reduces to the singleton {2}, so the only possibility would be x=y=z=2, which works if a=0; otherwise, no solution if (a,b)=(1,2). If b≥3, you always have the solution x=b, y=b, z=a+b, since b≤a+b<2b≤b²−b. So the answer is: there is always a solution, for all pairs (a,b) with the exception of (0,1) and (1,2). --131.114.72.230 (talk) 08:54, 27 April 2010 (UTC)
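The case analysis above can be double-checked by brute force. The sketch below is a small Python search (it treats 0 as a natural number, as the reply above does) over all x, y, z in [b, b²−b] for small b:

```python
def has_solution(a, b):
    """Is there x, y, z in [b, b**2 - b] with a*x + b*y == b*z?"""
    lo, hi = b, b * b - b
    return any(
        # z = (a*x + b*y) / b must be an integer in [lo, hi]
        (a * x + b * y) % b == 0 and lo <= (a * x + b * y) // b <= hi
        for x in range(lo, hi + 1)
        for y in range(lo, hi + 1)
    )

# Pairs (a, b) with a < b and no solution, for small b.
exceptions = {(a, b) for b in range(1, 8) for a in range(b)
              if not has_solution(a, b)}
print(exceptions)  # → {(0, 1), (1, 2)}
```

For every b ≥ 3 the search confirms the explicit solution x=y=b, z=a+b.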

Degree by which data is partitioned
Is there a term for the degree to which data is partitioned by a particular attribute? So, say I have a dataset of objects of different shape, size, colour, texture, etc., and I'm looking for the attribute which will best separate the data into evenly-sized subsets. E.g. if 50% of them are red, this would be an excellent way to sub-divide my objects. So, "Is the object red?" would be the question of maximum (something), and I'm looking for the term to use in place of (something). Sorry for the dire description but, as you can tell, I'm no mathematician! --Frumpo (talk) 13:48, 27 April 2010 (UTC)


 * Just a guess, but maybe what you're looking for is information content. Basically you're dividing up the objects into different bins; the information content is a measure of the probability of a particular way of dividing up the objects out of all possible ways of dividing them up. The labeling of the bins is arbitrary, so that should be taken into account by multiplying the probability by the number of possible labelings. Just how you do this depends on the nature of the problem you have, but the upshot should be that putting everything in a single bin, or putting each object in a bin by itself, results in a probability of 1, giving an information content of 0. Anything in between will have a probability of less than 1, so a positive information content. I don't want to get into a lot of details without knowing more about the specific problem, and this is just a guess as I said.--RDBury (talk) 14:52, 27 April 2010 (UTC)


 * In computer programming it's frequently important to break data up into approximately equal "bins". This comes up when sorting or searching a list, for example.  For this reason, many database management systems gather stats on data which can then be used to optimize those functions.


 * For example, let's say we want to search an alphabetically sorted list of names for "Clark". Without any further info, we might start right in the middle.  If we know that names all start with A-Z, we might say that we should start the search 3/26ths (2.5/26ths, technically) of the way down the list, since C is the 3rd letter out of 26.  If, however, we have stored the starting position of each letter, and we know we need to search the C's only, this can shorten the search considerably. If we know the starting position of CL's and CM's, then we can shorten the search even further.


 * This might suggest that the more partitioning we have, the better, but there is a limit. At some point, the partitioning info will take up more space than the data itself, and here we start hitting a level of diminishing returns. StuRat (talk) 16:25, 27 April 2010 (UTC)
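StuRat's idea of starting the search "3/26ths of the way down" rather than in the middle is essentially interpolation search. A minimal Python sketch, using numeric keys rather than names so that the proportional guess is simple arithmetic (the data below is a made-up example):

```python
def interpolation_search(sorted_keys, target):
    """Probe at a position proportional to where the target should fall
    in the current key range, instead of always probing the middle."""
    lo, hi = 0, len(sorted_keys) - 1
    while lo <= hi and sorted_keys[lo] <= target <= sorted_keys[hi]:
        if sorted_keys[hi] == sorted_keys[lo]:
            pos = lo
        else:
            # The proportional guess -- the "3/26ths of the way down" idea.
            pos = lo + ((hi - lo) * (target - sorted_keys[lo])
                        // (sorted_keys[hi] - sorted_keys[lo]))
        if sorted_keys[pos] == target:
            return pos
        if sorted_keys[pos] < target:
            lo = pos + 1
        else:
            hi = pos - 1
    return -1  # not found

data = list(range(0, 100, 5))          # 0, 5, 10, ..., 95
print(interpolation_search(data, 35))  # → 7
```

On uniformly distributed keys this probes far fewer entries than binary search, which is exactly the benefit database statistics buy.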


 * The direct concept you're asking for, I think, is information gain. Finding ways to partition data to maximize the gain is a standard problem in machine learning.  The articles cluster analysis and statistical classification might be helpful. 69.228.170.24 (talk) 19:17, 27 April 2010 (UTC)


 * Thank you. I think Information Gain is, indeed, the term that I need.  My query was somewhat related to machine learning.--Frumpo (talk) 09:17, 28 April 2010 (UTC)
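For the balance-only sense of the original question, the usual measure is the Shannon entropy of the partition itself (in decision-tree terminology this is the "split information"; information gain proper additionally involves a target class). A minimal Python sketch, where the colour and texture data are made-up examples:

```python
import math
from collections import Counter

def split_entropy(values):
    """Shannon entropy (in bits) of the partition induced by an attribute.
    Evenly sized subsets maximize it, so in this balance-only sense the
    best attribute to split on is the one with the highest entropy."""
    n = len(values)
    return -sum((c / n) * math.log2(c / n)
                for c in Counter(values).values())

# Hypothetical example data: 10 objects, two candidate attributes.
colour = ["red"] * 5 + ["blue"] * 5      # 50/50 split
texture = ["rough"] * 9 + ["smooth"]     # 90/10 split

print(split_entropy(colour))   # → 1.0 (maximal for a two-way split)
print(split_entropy(texture))  # ≈ 0.469
```

So "Is the object red?" with a 50/50 answer attains the maximum (1 bit for a yes/no attribute), matching the intuition in the question.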

Non-associativity of substitution in calculus
I am looking for a reference for a particular strange phenomenon in the syntactics of calculus.

Define $$F(a) = \displaystyle\frac{xa^2}{2}$$, which might arise from an integral such as $$\displaystyle \int_0^a xt\,dt .$$

Now, in the usual syntactic notation of calculus,


 * $$ F(a) \Big |_{a = x+1} = \displaystyle\frac{x(x+1)^2}{2}$$

On the other hand,


 * $$ \left ( F(a) \Big |_{a = x}\right) \Big |_{x = x+1} = F(x) \big |_{x = x+1} = \displaystyle\frac{(x+1)^3}{2}$$

So the "vertical bar" operator has a sort of non-associativity, which is obviously related to the concept of free variables. Is there any calculus text that actually examines the vertical bar operator rigorously?

Note: I am not asking why the expressions are not equal; that part is obvious. I am just asking for a reference. The example here is a minimal example of an issue that came up when I was thinking about the fundamental theorem of calculus and the more general formula for differentiation under the integral sign. &mdash; Carl (CBM · talk) 15:04, 27 April 2010 (UTC)


 * This would seem to fit in a logic text better than in a calculus text. And if I were to spend time on this in a calculus class, it would be time spent not teaching calculus. Michael Hardy (talk) 17:17, 27 April 2010 (UTC)
 * Sure, I wouldn't teach this in calc, but its resemblance to logic is only superficial. I started thinking about this when I was preparing material on the fundamental theorem, and realized I couldn't give a 1-second answer to why the fundamental theorem does not apply to this derivative: $$\textstyle\frac{d}{da}\textstyle\int_0^a at\,dt$$. The answer is that $$\textstyle\frac{\partial}{\partial a} at$$ is nonzero. &mdash; Carl (CBM · talk) 20:19, 27 April 2010 (UTC)
 * Why it should apply to that situation is another question one could raise. I've noticed non-mathematicians have an oddly negative way of asking some questions.  If one asks how many prime numbers there are, they might say "Aren't there infinitely many?", as if the absence of a known reason why there aren't infinitely many justifies the conclusion that there are infinitely many. Michael Hardy (talk) 03:22, 28 April 2010 (UTC)
 * Yeah, but they're the ones we have to teach. &mdash; Carl (CBM · talk) 14:36, 28 April 2010 (UTC)
 * I think the problem here is just that "x=x+1" is a contradiction. If you assert a contradiction to be true, you aren't going to get a meaningful answer. Changing x to x+1 is more of a transformation than a substitution. It's not surprising that substitutions and transformations behave differently. --Tango (talk) 18:04, 27 April 2010 (UTC)
 * I'm sorry; the notation I used may not be as well known as I thought. In general, the notation $$E\Big |_{a = \sigma}$$ means to replace every occurrence of the letter a in the expression E with the string &sigma;. As in, $$\textstyle\int_\sigma^\tau f(x)\,dx = \left (\textstyle \int f(x)\, dx \right) \Big|_{x = \sigma}^\tau$$. The reason that the thing in my original post fails is obvious. But the usual story in calc I is that you can work syntactically in a completely naive way and you will still get the right answer. &mdash; Carl (CBM · talk) 20:19, 27 April 2010 (UTC)
 * The notation is standard, but I would only use it for substituting one thing for another. x->x+1 isn't really a substitution. It is clear what the notation is supposed to mean, but I don't think it is wise to use the notation like that. Transforming a variable is a very different operation to substituting one variable for another, so I wouldn't use the same notation for both. --Tango (talk) 00:53, 28 April 2010 (UTC)
 * This case is exactly what is meant by substitution in elementary logic, as Carl explained. I'm not sure what you mean when you insist it is not a substitution. Algebraist 01:08, 28 April 2010 (UTC)


 * How would you compute $$\textstyle\int_{a+1}^{a+3} a\,da = (a^2/2)\Big|_{a = a+1}^{a+3}$$ without this type of substitution? &mdash; Carl (CBM · talk) 14:36, 28 April 2010 (UTC)
 * I wouldn't use the same symbol for a dummy variable as a real one. It would be much clearer if you wrote $$\textstyle\int_{a+1}^{a+3} b\,db = (b^2/2)\Big|_{b = a+1}^{a+3}$$. --Tango (talk) 14:55, 28 April 2010 (UTC)
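The two results under discussion can also be illustrated numerically: substituting a = x+1 in one step, versus substituting a = x and then x → x+1, yield different functions, so they disagree at a sample point. A minimal Python sketch (x = 2 is an arbitrary choice):

```python
def F(x, a):
    """F(a) = x*a**2 / 2, with x free."""
    return x * a ** 2 / 2

x = 2.0  # arbitrary sample point

# F(a) |_{a = x+1}: substitute a = x+1 in one step.
one_step = F(x, x + 1)      # x*(x+1)^2 / 2 -> 2*9/2 = 9.0

# (F(a) |_{a = x}) |_{x = x+1}: substitute a = x first, *then* shift x.
def G(x):
    return F(x, x)          # x^3 / 2 after the first substitution

two_step = G(x + 1)         # (x+1)^3 / 2 -> 27/2 = 13.5

print(one_step, two_step)   # → 9.0 13.5
```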


 * That sort of reminds me of a problem in automatic differentiation called "perturbation confusion". I remember hearing that the book SICM uses Scheme code instead of traditional math notation partly to get around such problems, but I haven't read the book. 69.228.170.24 (talk) 19:53, 27 April 2010 (UTC)


 * Thanks! I had not thought to look into the CS literature on automatic differentiation, but now that you say it I can see how they will run into the same problems. &mdash; Carl (CBM · talk) 20:20, 27 April 2010 (UTC)

I would look into logic for this. Substitution really is used for a lot in it. I am a bit puzzled as to why you find it surprising that substitutions do not commute, though. Consider &sigma;=a->b and &rho;=b->a. Then, using prefix notation for composition, &sigma;&rho;=a->b=&sigma;, while &rho;&sigma;=b->a=&rho;. This is because, to take &sigma;&rho;, we first apply &rho;, changing all b's to a's, then apply &sigma;, changing the a's to b's. Hence the b's get returned back to b's and the a's get changed to b's.  Taemyr (talk) 07:15, 28 April 2010 (UTC)
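Taemyr's composition example can be checked mechanically. A minimal Python sketch, where `apply_subst` is a hypothetical helper that applies a letter-for-letter substitution simultaneously to a string:

```python
def apply_subst(subst, s):
    """Apply a letter-for-letter substitution simultaneously to a string."""
    return "".join(subst.get(ch, ch) for ch in s)

sigma = {"a": "b"}   # sigma = a -> b
rho = {"b": "a"}     # rho   = b -> a

word = "ab"
# sigma rho: apply rho first, then sigma.  Acts like sigma alone.
print(apply_subst(sigma, apply_subst(rho, word)))  # → "bb"
# rho sigma: apply sigma first, then rho.  Acts like rho alone.
print(apply_subst(rho, apply_subst(sigma, word)))  # → "aa"
```

Note that applying a substitution map in one pass, as here, is what makes it simultaneous; composing two such passes is what produces the order dependence.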

I think that the bar operator is the same thing as application in the lambda calculus (except that your description of it doesn't make it clear what happens if you have a bar inside a bar, both of which substitute for the same variable). The two equations you discuss look like this in the lambda notation (I took out the division by two):

$$(\lambda a.xa^2)(x+1) = x(x+1)^2$$

$$(\lambda x.(\lambda a.xa^2) x)(x+1) = (\lambda a.(x+1)a^2)(x+1) = (x+1)(x+1)^2$$

(You can express the second case more simply as $$(\lambda a.xa^2)(x+1)$$)

In any event, the important thing is that what comes between the $$\lambda$$ and the period is not a value. (For example, $$(\lambda 2^2.6)(3)$$ is an incoherent lambda expression, just like $$6 \Big |_{2^2=3}$$ is incoherent.) Rather, it's the name of a variable, and its presence causes all of the occurrences of that variable "under" the lambda to become bound variables. To answer your actual question, I'm pretty sure that a rigorous treatment of the bar operator is the lambda calculus, which is extensively studied in programming language theory, but I don't know if many mathematicians care that much. But it's an interesting area of study; the lambda calculus has only two constructs (pure lambda calculus does not include arithmetic or anything else), and it's still capable of representing any possible computation. Paul (Stansifer) 18:56, 28 April 2010 (UTC)
 * Oops, my attempt at simplification was bogus, because it ignored the x in the inner expression. In fact, the whole point of the problem was that that x should be captured and shadowed by the outer lambda, not free at the top level.  The alpha-renaming rule, however, tells us that we can safely rephrase it as $$(\lambda z.(\lambda a.za^2) x)(x+1)$$, removing the confusing name clash.  Paul (Stansifer) 09:42, 29 April 2010 (UTC)

I think the idea of de Bruijn indexes is that the confusion goes away if you replace the variables in the λ-expressions with them. 69.228.170.24 (talk) 05:20, 29 April 2010 (UTC)