Wikipedia:Reference desk/Archives/Mathematics/2006 October 14

I love my Turing Machine!
I have a task to complete: design a Turing Machine which halts on all inputs from a certain language, and halts and accepts only on inputs from a particular subset of that language. I should mention that in the course I'm taking we've taken the option of considering a TM as a set of rules of the form {READ STATE, READ SYMBOL, CHANGE TO STATE, CHANGE TO SYMBOL, GO LEFT OR RIGHT OR PAUSE}, with a "start" state and an "accept" state. The machine starts on the left-most letter of the input word, and halts without accepting if, in a given state, it encounters a symbol for which no rule exists.
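The rule format described here can be made concrete with a tiny simulator. This is a sketch in Python; the names (`run`, `BLANK`) and the example rule table (a machine accepting non-empty strings of 1s) are made up for illustration and are not the asker's machine:

```python
# A minimal Turing-machine simulator for rules of the form
# (read_state, read_symbol) -> (new_state, new_symbol, move),
# where move is 'L', 'R', or 'P' (pause/stay).  The machine halts
# without accepting when no rule matches, and halts and accepts on
# reaching the accept state, matching the convention described above.

BLANK = '_'

def run(rules, word, start='q0', accept='qA', max_steps=10_000):
    tape = dict(enumerate(word))          # sparse tape, blank elsewhere
    state, head = start, 0                # head starts on left-most letter
    for _ in range(max_steps):
        if state == accept:
            return True                   # halt and accept
        key = (state, tape.get(head, BLANK))
        if key not in rules:
            return False                  # no rule: halt without accepting
        state, symbol, move = rules[key]
        tape[head] = symbol
        head += {'L': -1, 'R': 1, 'P': 0}[move]
    raise RuntimeError("step limit exceeded")

# Illustrative example: accept non-empty strings of 1s.
rules = {
    ('q0', '1'): ('q1', '1', 'R'),      # saw at least one 1
    ('q1', '1'): ('q1', '1', 'R'),      # keep scanning 1s
    ('q1', BLANK): ('qA', BLANK, 'P'),  # end of word: accept
}

print(run(rules, '111'))   # True
print(run(rules, ''))      # False (empty word rejected)
print(run(rules, '101'))   # False (no rule for '0': halt, no accept)
```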

Anyway, I've completed the task, and designed a TM. I'm happy with it - I think it's quite cute in fact. And I should think anyone reading my rules with the accompanying commentary would be happy too. What I would like to do is make a systematic and complete justification of my TM - i.e. prove that it does exactly what it's meant to. The trouble is, when I start writing about it I end up with paragraph upon paragraph of flowery prose and going round in circles. This is silly. It's a simple machine really and is dealing with a simply defined subset of a simple language.

So, my question is this: does there exist a standard approach to justifying a machine of this type which would be concise and rigorous? It's bugging me since I love my machine and it clearly (to me) does its job but I end up going round in circles trying to justify it.

Thanks --87.194.21.177 01:04, 14 October 2006 (UTC)


 * There's no systematic way to prove a Turing machine does what it's supposed to do. As you know, there's no general method for even proving that a Turing machine terminates.  And writing proofs is hard.
 * Can you do a structural induction proof to show your machine works? That is, show it works for simple cases, and then any expansion of those simple cases must also work?
 * In any case, you might have to do two proofs - first to show that the machine accepts everything it's supposed to, and a second proof to show that it rejects everything it's not supposed to. --Robert Merkel 01:19, 14 October 2006 (UTC)


 * Assuming that the TM design is indeed simple, I expect the structure of a nice proof to depend mostly on the nature of the language and its subset. Does the head move to and fro? Is auxiliary data written to the tape? A possible approach may be to come up with an invariant: a predicate that is true for the initial configuration and is preserved by each step, and that implies for a final configuration that the machine is halted in the right acceptance condition. This has to be supplemented with a variant function, a mapping from the space of configurations to some domain with a well order, such that the function value decreases on each step. If a proof can be given at all, it can be put in this form. --Lambiam Talk  01:33, 14 October 2006 (UTC)
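Lambiam's invariant-plus-variant recipe can be illustrated on a toy machine (hypothetical, not the asker's): a one-state machine that erases 1s moving right and accepts on a blank. The sketch below checks, at every step of a simulated run, that the invariant is preserved and the variant strictly decreases:

```python
# Invariant: every cell to the left of the head is blank.
# Variant:   (number of 1s on the tape) + (0 if accepted else 1),
#            a natural number that strictly decreases on every step,
#            so the machine must halt.

BLANK = '_'

def step(tape, state, head):
    sym = tape.get(head, BLANK)
    if state == 'q0' and sym == '1':
        tape[head] = BLANK            # erase the 1 and move right
        return 'q0', head + 1
    if state == 'q0' and sym == BLANK:
        return 'qA', head             # blank reached: accept
    raise AssertionError("stuck")

def variant(tape, state):
    ones = sum(1 for s in tape.values() if s == '1')
    return ones + (0 if state == 'qA' else 1)

def invariant(tape, head):
    lo = min(tape, default=0)
    return all(tape.get(i, BLANK) == BLANK for i in range(lo, head))

tape = dict(enumerate('111'))
state, head = 'q0', 0
while state != 'qA':
    v_before = variant(tape, state)
    state, head = step(tape, state, head)
    assert invariant(tape, head)            # preserved by every step
    assert variant(tape, state) < v_before  # strictly decreases
print("accepted with invariant and variant verified")
```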


 * As a matter of fact, I would have felt fairly sure that the language and its subset are regular, were it not for the phrase 'halts on all inputs', which seems to indicate that no word in the language has a proper left subword. Now, it might still be regular, with a very simple structure; e.g., there could be one letter that stands at the end of each word, but only there. If you can strip the verification down to the recognition of a regular language, you can construct a finite automaton doing this, and then verify that the Turing machine accepts the same language.
 * Of course, this is not very helpful, unless you have learned how to relate a finite automaton with a regular language. JoergenB 04:08, 14 October 2006 (UTC)

Thanks guys. I don't know about regular languages - maybe I'll dip a toe. Meanwhile I think I'll start thinking along the other lines suggested. I think that, as Robert says above, "writing proofs is hard" is the essential truth I am struggling with as per usual. Thanks again. --87.194.21.177 15:37, 14 October 2006 (UTC)

Elasticity of substitution
I understand that the Elasticity of substitution is the % change in x/y over the % change in the rate of technical substitution, or MRS, but then I become confused when it is simplified to: d ln x/y over d ln f(x)/f(y).

Could someone state this in terms of operations, or what I should calculate if all I have is their production function and conditional input demand functions? Thanks, ChowderInopa 02:03, 14 October 2006 (UTC)


 * Hmm...let's try this: take some variable x, and some function of that variable y(x).


 * $$\frac{d \ln y(x)}{dx} = \frac{d \ln y}{dx} = \frac{d}{dx} \ln y = \frac{1}{y} \frac{dy}{dx}$$. Does this help? --HappyCamper 04:30, 14 October 2006 (UTC)


 * If it helps you: interpret "d ln x/y over d ln f(x)/f(y)" as: (d (ln x/y) / dx) over (d (ln f(x)/f(y)) / dx), that is, as the quotient of two ordinary d/dx forms. --Lambiam Talk  11:10, 14 October 2006 (UTC)
 * Further, the notation "f(x)/f(y)" is misleading. This should be something like "Ux/Uy", the ratio of two partial derivatives of the utility function U in x and y. --Lambiam Talk  11:24, 14 October 2006 (UTC)

Lambiam, you are right, the bottom is the partial derivatives of the utility function, my bad. I'm still not understanding though...

I am taking the derivative of ln(x/y) with respect to what? And then I divide this by the derivative of the ln of the ratio of the partial derivatives of the utility function??? Help! ChowderInopa 23:57, 14 October 2006 (UTC)


 * I don't know the meaning of the variables here, but in any case, for the formula to make sense, there has to be something coupling x and y, making them dependent on a common variable. It could be that y is a function of x, or x and y both are functions of t (say time). If x and y both depend on a common variable (like x or t), just take the derivatives w.r.t. that variable. --Lambiam Talk  18:59, 15 October 2006 (UTC)
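A numerical sketch of this reading, using a Cobb-Douglas production function f(x, y) = x^a y^b as a hypothetical example (its elasticity of substitution is known to be 1; the raw quotient below comes out as −1, and the usual convention takes the absolute value):

```python
# Along an isoquant f(x, y) = 1, compute
#   sigma = d ln(x/y) / d ln(f_x / f_y),
# taking both derivatives with respect to the same parameter (here x),
# as suggested above.  All numbers are illustrative.

import math

a, b = 0.3, 0.7

def f(x, y):
    return x**a * y**b

def fx(x, y, h=1e-6):                 # partial derivative in x
    return (f(x + h, y) - f(x - h, y)) / (2 * h)

def fy(x, y, h=1e-6):                 # partial derivative in y
    return (f(x, y + h) - f(x, y - h)) / (2 * h)

def on_isoquant(x):                   # y such that f(x, y) = 1
    return x ** (-a / b)

def sigma(x, dx=1e-5):
    x1, x2 = x, x + dx
    y1, y2 = on_isoquant(x1), on_isoquant(x2)
    d_ln_ratio = math.log(x2 / y2) - math.log(x1 / y1)
    d_ln_mrs = (math.log(fx(x2, y2) / fy(x2, y2))
                - math.log(fx(x1, y1) / fy(x1, y1)))
    return d_ln_ratio / d_ln_mrs

print(sigma(2.0))   # close to -1; |sigma| = 1 for Cobb-Douglas
```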

Permutations
Hello. Can you please give me instant tricks for solving permutations?
 * Hello, our permutation article says: it is easy to count the number of permutations P of size r when chosen from a set of size n (with r ≤ n).
 * The formula is P(n,r) = n! / (n−r)!, where "!" means factorial.
 * Our factorial article says that n! = ∏_{k=1}^{n} k, where ∏ ("capital pi") denotes a product, giving n! = 1*2*...*(n−1)*n.
 * One easy trick I'm thinking of is just solving with your brain when, say, n < 10, and otherwise using a spreadsheet or any computer algebra system. Does that help? -- DLL .. T 18:12, 14 October 2006 (UTC)
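The formula above translates directly into Python; `math.perm` (available from Python 3.8) computes the same quantity:

```python
# P(n, r) = n! / (n - r)!, the number of ordered selections of r
# items from n, as given in the reply above.

from math import factorial, perm

def P(n, r):
    return factorial(n) // factorial(n - r)

print(P(5, 2))                # 20: ordered pairs from 5 items
print(P(5, 2) == perm(5, 2))  # True: math.perm agrees
```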


 * Can you explain what you mean by "solving permutations"? Could you give an example of how to solve one (simple) permutation? --Lambiam Talk  20:15, 14 October 2006 (UTC)

Linear vector spaces
I am learning about linear vector spaces at university at the moment, and was wondering about slightly odd bases, such as {sin(t), cos(t)}. What are the coordinates of the function f(t) = 3 sin(t) + 5 cos(t) with respect to these basis {sin(t), cos(t)}, for example. Is the answer simply (3,5)? Batmanand | Talk 13:27, 14 October 2006 (UTC)


 * Yes. The coordinates are just the amounts you need to multiply every vector in the base and add together to get your desired vector. Note that you need an ordered base, that is, (sin, cos) and not {sin, cos}. -- Meni Rosenfeld (talk) 13:37, 14 October 2006 (UTC)
 * Your last statement is merely a matter of taste.--gwaihir 13:59, 14 October 2006 (UTC)
 * Taste? How can you tell if (3, 5) means 3 sin(t) + 5 cos(t) or 3 cos(t) + 5 sin(t) if the base is {sin(t), cos(t)}, which is of course equal to {cos(t), sin(t)}? -- Meni Rosenfeld (talk) 14:02, 14 October 2006 (UTC)
 * If you use unordered bases, sets of co-ordinates are functions on the basis, so it's not (3, 5) but rather (sin → 3, cos → 5) (in either order).--gwaihir 15:27, 14 October 2006 (UTC)
 * Very well, but he was specifying his expectation to have (3, 5) as the coordinates vector, which can only work if the base is ordered. So, in general it is a matter of taste, but not within the context of this question. -- Meni Rosenfeld (talk) 18:34, 14 October 2006 (UTC)
 * Batmanand, just to make sure you made a typo and didn't commit one of the beginner's most common mistakes, when you wrote these basis: {sin(t), cos(t)} is one basis. The basis consists of two basis vectors (which in this case may be called basis functions).  You'll soon meet situations where you consider several bases simultaneously, but then you should notice that that means several collections of basis vectors. JoergenB 14:42, 14 October 2006 (UTC)
 * Lol you do me too much of a service; I have only just started the course and did indeed make the elementary error you mention. Thanks for pointing it out. Batmanand | Talk 16:20, 14 October 2006 (UTC)
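One quick way to check the coordinates (3, 5) in the ordered basis (sin, cos), as a sketch: evaluate f at points where one basis function vanishes and the other equals 1.

```python
# Recovering the coordinates of f(t) = 3 sin t + 5 cos t in the
# ordered basis (sin, cos) discussed above.

import math

def f(t):
    return 3 * math.sin(t) + 5 * math.cos(t)

c_sin = f(math.pi / 2)   # cos(pi/2) = 0 and sin(pi/2) = 1
c_cos = f(0.0)           # sin(0) = 0 and cos(0) = 1

print((round(c_sin), round(c_cos)))   # (3, 5)
```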


 * And it's bases in plural, isn't it? Of course, when unsure of the number, you could write these basis and say that one or the other was just a typo. ;-) In any case, I'm not very fond of writing vectors as a sequence of coordinates (like "$$(1,5,3)$$"), because I seem to never know which basis I (or someone else) was referring to anyway. If my example was implying the basis vectors $$\mathbf{\hat{x}}$$, $$\mathbf{\hat{y}}$$ and $$\mathbf{\hat{z}}$$ (in that order) I prefer $$\mathbf{\hat{x}}+5\mathbf{\hat{y}}+3\mathbf{\hat{z}}$$. This way you can also mix basis vectors from different bases in the same expression. —Bromskloss 19:59, 15 October 2006 (UTC)


 * Not to mention what happens if the vector space isn't of finite dimension. You'd have a hard time writing out the whole coordinate sequence then, whereas by using the other way you can use a summation symbol. —Bromskloss 20:04, 15 October 2006 (UTC)


 * I don't think that's really relevant; if you can write $$\sum_{i=0}^\infty f(i)\hat e_i$$, surely you could also write the sequence (not the set) $$\{f(i)\}_{i=0}^\infty$$ of coordinates? Moreover, for an uncountable basis, there's no way to write the sum (although in some cases an integral can be substituted), but you can still use a coordinate-like function $$f(t)$$ of some continuous variable t (see, for example, Fourier transform and Laplace transform, although I'm not sure they can precisely be called bases).  --Tardis 15:20, 16 October 2006 (UTC)

d2/dt2 = H d2 / dX2
d2/dt2 * ß = H * d/dx * ß, Solve for H. 205.188.116.136 14:36, 14 October 2006 (UTC)


 * What is ß and what is H? Functions? Constants? --HappyCamper 15:32, 14 October 2006 (UTC)


 * And what's with the asterisk? Some kind of convolution operation? --Lambiam Talk  15:45, 14 October 2006 (UTC)


 * Assuming you mean
 * $${ \partial^2 \over \partial t^2 } \beta = H { \partial \over \partial x } \beta\,,$$
 * it might help to apply the method of separation of variables; compare how the wave equation is solved for the case of one space dimension. --Lambiam Talk  16:06, 14 October 2006 (UTC)
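A sketch of where separation of variables leads for this equation: writing β = T(t)X(x) gives T''/T = H X'/X = λ for some constant λ, so T(t) = e^{√λ t} and X(x) = e^{(λ/H)x} form one solution branch. The finite-difference check below uses arbitrary illustrative values λ = 2, H = 3:

```python
# Verify numerically that beta(x, t) = exp(sqrt(lam) t) exp((lam/H) x)
# satisfies  d^2 beta/dt^2 = H d beta/dx  (the reading above).

import math

lam, H = 2.0, 3.0     # arbitrary illustrative constants

def beta(x, t):
    return math.exp(math.sqrt(lam) * t) * math.exp((lam / H) * x)

def beta_tt(x, t, h=1e-4):    # second t-derivative by central difference
    return (beta(x, t + h) - 2 * beta(x, t) + beta(x, t - h)) / h**2

def beta_x(x, t, h=1e-6):     # first x-derivative by central difference
    return (beta(x + h, t) - beta(x - h, t)) / (2 * h)

x, t = 0.5, 0.7
residual = beta_tt(x, t) - H * beta_x(x, t)
print(abs(residual) < 1e-4)   # True: the PDE is satisfied
```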


 * Well, if they're functions there's simply no general solution. Separation of variables can't be applied if the solution is inseparable, and there's no reason to automatically assume it is. To solve it, you'll need some conditions. --BluePlatypus 01:33, 18 October 2006 (UTC)

alpha level increase power?
Hi:

I am sitting at my desk, trying to figure out the following statistics problem: if the alpha level increases, power also increases. Now, I wonder whether power can be less than the alpha level for a hypothesis test. I think it cannot be less, as they are somewhat related and depend on each other. But I am not 100% sure. Does anyone know anything that would illuminate my mind and refresh my brain cells a little? I would be thrilled. Thanks much. Hersheysextra 16:15, 14 October 2006 (UTC)


 * For the moment I can only say that if that were to turn out possible for some test, then it is a strange test. I tried to see if the impossibility follows from the usual requirement of the test statistic being a sufficient statistic, but the (presumably trivial) proof eluded me. --Lambiam Talk  23:41, 14 October 2006 (UTC)


 * As I understand it, power is 1 - beta, where beta is the risk of a Type II error, while alpha is the risk of a Type I error. Thus power and alpha aren't directly related. However, alpha would normally be less than 1 - beta, as both alpha and beta are usually small.--86.136.126.188 17:36, 17 October 2006 (UTC)


 * The "alpha" is the probability of type I error, which is the probability of rejecting your null hypothesis when it is in fact true. The "power" is (1-P(type II error)), or the probability of rejecting your null hypothesis when it is false. So:

$$1 - \beta < \alpha \quad \Rightarrow \quad P_{H_0}(\text{accept } H_1) > P_{H_1}(\text{accept } H_1)$$
 * means you are more likely to reject your null hypothesis when it is true than when it is false – which is obviously a pretty stupid thing to do. It is entirely possible (it might take me a few minutes to come up with a specific case, but it just depends on the critical value you choose for your test), but if you come across a test that has this characteristic, it would be a smart move not to use it ;-) – AlbinoMonkey (Talk) 07:43, 18 October 2006 (UTC)
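A concrete, deliberately bad test of the kind described above (the numbers are illustrative): observe X ~ N(θ, 1), test H0: θ = 0 against H1: θ = 2, and reject H0 when X < −1.645, so the critical region sits on the wrong side of H0:

```python
# With this critical region, alpha is about 0.05 but the power is
# roughly 1.3e-4, so power < alpha: a valid but foolish test.

from math import erf, sqrt

def norm_cdf(z):                       # standard normal CDF via erf
    return 0.5 * (1 + erf(z / sqrt(2)))

crit = -1.645
alpha = norm_cdf(crit - 0.0)   # P(reject | H0: theta = 0)
power = norm_cdf(crit - 2.0)   # P(reject | H1: theta = 2)

print(f"alpha = {alpha:.4f}, power = {power:.6f}")
print(power < alpha)           # True: power far below alpha
```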

significance of e
There is a definition for e in wikipedia. Assume the following.

y=f(x); y=e^x; y'=x(e^(x-1));

here, acceleration of y is always equal to velocity of x. Does this point that - acceleration of y is equal to velocity of x - hold good for only e or does that hold good for all integers?


 * Um, you seem to have differentiated y with respect to e, not x; I don't think that's a meaningful thing to do. Melchoir 17:02, 14 October 2006 (UTC)


 * $$e^x =\frac{x^0}{0!} +\frac{x^1}{1!} +\frac{x^2}{2!} + ... + \frac{x^n}{n!} + ...\,$$


 * $$\frac{d(e^x)}{dx} = 0 + \frac{1x^0}{1!} + \frac{2x^1}{2!} + ... + \frac{nx^{n-1}}{n!} + ...\,$$


 * $$\frac{d(e^x)}{dx} = \frac{x^0}{0!} + \frac{x^1}{1!} + ... + \frac{x^{n-1}}{(n-1)!} + ...\,$$


 * $$\frac{d(e^x)}{dx} = e^x\,$$ Richard B 18:06, 14 October 2006 (UTC)
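The term-by-term differentiation above can be checked numerically with truncated series:

```python
# The truncated series for e^x and its term-by-term derivative both
# match math.exp, illustrating d(e^x)/dx = e^x as derived above.

import math

def exp_series(x, n=30):
    return sum(x**k / math.factorial(k) for k in range(n))

def exp_series_deriv(x, n=30):
    # d/dx of x^k/k! is k x^(k-1)/k! = x^(k-1)/(k-1)!
    return sum(x**(k - 1) / math.factorial(k - 1) for k in range(1, n + 1))

x = 1.7
print(abs(exp_series(x) - math.exp(x)) < 1e-9)        # True
print(abs(exp_series_deriv(x) - math.exp(x)) < 1e-9)  # True: derivative equals the function
```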

Visual Analog Linear Scale
When was it first invented? 124.109.18.18 20:17, 14 October 2006 (UTC)


 * There is an article: Sriwatanakul K, Kelvie W, Lasagna L, Calimlim JF, Weis OF, Mehta G., "Studies with different types of visual analog scales for measurement of pain." Clinical Pharmacology and Therapeutics 1983; 34(2):234-9. An abstract is here. I can't tell whether this presents the actual invention, but I think this article was instrumental in making them popular, at least for eliciting pain intensity judgements from patients. --Lambiam Talk  23:35, 14 October 2006 (UTC)