Wikipedia:Reference desk/Archives/Mathematics/2007 December 5

= December 5 =

Extensions, Polynomials, etc.
How do you pronounce things like Z[x] and Q(x,y)? Black Carrot (talk) 21:27, 5 December 2007 (UTC)


 * Generally, I use pauses that make it pretty clear where the brackets are. For example, (x+y)-z would have (x+y) pronounced as a phrase, with a pause before -z. If there's much chance of it being ambiguous, I'd say the brackets ('open brackets' etc.). Daniel (‽) 21:49, 5 December 2007 (UTC)
 * I sometimes do that, and sometimes say things like 'Z adjoin x'. You (might) have to be more careful with the second example, of course, due to the difference between Q(x,y) and Q[x,y]. Algebraist 22:16, 5 December 2007 (UTC)
 * In situations where it could be unclear, (in my limited experience) we always said 'Z bracket x' for the first, while reserving adjoin (as in 'Q adjoin x and y') for fields. 134.173.93.150 (talk) 05:34, 6 December 2007 (UTC)
 * If by $$Q(x,y)$$, you mean that Q is a function of x and y, I think it would be said, "Q of x and y". Strad (talk) 00:14, 6 December 2007 (UTC)
 * In the context, I think we're talking about the field of rational functions in two indeterminates over $$\mathbb{Q}$$. That's certainly what I was talking about. Algebraist 00:25, 6 December 2007 (UTC)

We are. So, they're usually "bracket" or "adjoin"? Black Carrot (talk) 14:38, 8 December 2007 (UTC)

What's wrong with this proof?
Quite some time ago, I found the following proof on the internet:

$$\begin{array}{rl}
1. & x = y \\
2. & x^2 = xy \\
3. & x^2 - y^2 = xy - y^2 \\
4. & (x+y)(x-y) = y(x-y) \\
5. & x+y = y \\
6. & 2y = y \\
7. & 2 = 1
\end{array}$$

Since I'm fairly sure that 2 != 1, this is probably wrong, but I can't figure out where the mistake is made. Any ideas? Thanks in advance. Horselover Frost (talk) 23:05, 5 December 2007 (UTC)


 * Between steps 4 and 5, you divided both sides by x-y, which is 0 according to the initial assumption. Dividing by zero makes funny things happen. 69.246.218.176 (talk) 23:10, 5 December 2007 (UTC)
 * There's also another error: From 6 to 7, you divide by Y without knowing if it is 0 or not. The solution to $$2y=y$$ is not $$2=1$$ but $$y=0$$. -- Meni Rosenfeld (talk) 23:13, 5 December 2007 (UTC)
 * (ec) Once again we see that Wikipedia has an article on everything. Check out invalid proof for this plus a number of more cunning ones. Btw, this proof works in the trivial ring, in which you can divide by zero. Fortunately, in this case, 1 does equal 2. Algebraist 23:18, 5 December 2007 (UTC)
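A quick numeric sketch of the two slips identified above, taking x = y = 3 for concreteness (the step numbering follows the replies):

```python
# The fallacious proof starts from x = y; take a concrete instance.
x = y = 3

# Steps 3 and 4 are genuine identities (both sides are 0 here):
assert x*x - y*y == x*y - y*y
assert (x + y)*(x - y) == y*(x - y)

# Step 5 divides both sides by (x - y) -- but x = y, so that divisor is zero:
print(x - y)  # → 0

# Step 7 divides 2y = y by y; the honest conclusion is y = 0, not 2 = 1:
print([t for t in range(-5, 6) if 2*t == t])  # → [0]
```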

Limiting behaviour of Markov Chain
Hi, I have a stochastic matrix which represents a Markov chain. The Markov chain basically describes the probabilities of a simple game. The game involves 2 people, A and B, with 5 counters; at each round there's a probability p that A gains a counter and a probability (1-p) that B wins one of A's. I want to find the limit of the probability matrix. Here is the matrix:

$$\begin{array}{cccccc}
1 & 0 & 0 & 0 & 0 & 0 \\
(1-p) & 0 & p & 0 & 0 & 0 \\
0 & (1-p) & 0 & p & 0 & 0 \\
0 & 0 & (1-p) & 0 & p & 0 \\
0 & 0 & 0 & (1-p) & 0 & p \\
0 & 0 & 0 & 0 & 0 & 1
\end{array}$$

(where the column index, starting from 0, is the number of counters A has). I've found, using eigenvectors, that a solution is 6 rows of the form (alpha, 0, 0, 0, 0, beta), where alpha and beta are chosen arbitrarily, but is there any way of finding their exact values? I've not looked much at this topic, but I am very interested, so any help is appreciated. Thanks 212.140.139.225 (talk) 23:58, 5 December 2007 (UTC)
 * This exact question (in slightly greater generality) is answered at Gambler's ruin. Note that you don't need to think about eigenvectors: it's obvious (I suppose formally you'd appeal to the Borel–Cantelli lemma or something) that eventually all the counters are in either A's hands or B's, so the only question is: given that A starts with n counters, what is the probability of A ending up with everything? If one denotes this $$p_n$$, one gets recurrence relations in the $$p_n$$, which are fairly easy to solve, giving the answers in the article I linked. Algebraist 00:17, 6 December 2007 (UTC)
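The limiting matrix described above can also be checked numerically by repeatedly squaring the transition matrix (a pure-Python sketch; the function names are illustrative):

```python
# Limiting behaviour of the gambler's-ruin chain above: states 0..5 count
# A's counters, states 0 and 5 are absorbing, and from state i (0 < i < 5)
# A gains a counter with probability p or loses one with probability 1 - p.

def transition_matrix(p, n=6):
    T = [[0.0] * n for _ in range(n)]
    T[0][0] = T[n - 1][n - 1] = 1.0          # absorbing states
    for i in range(1, n - 1):
        T[i][i - 1] = 1.0 - p                # A loses a counter
        T[i][i + 1] = p                      # A gains a counter
    return T

def mat_mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def limiting_matrix(p, squarings=50):
    # Repeated squaring gives T^(2^squarings), which is numerically
    # indistinguishable from the limit for this small chain.
    M = transition_matrix(p)
    for _ in range(squarings):
        M = mat_mul(M, M)
    return M

# Each row has the form (alpha, 0, 0, 0, 0, beta): starting with n counters,
# A is ruined with probability alpha and wins everything with probability beta.
M = limiting_matrix(0.5)
print([round(x, 6) for x in M[2]])   # fair game, n = 2 → [0.6, 0.0, 0.0, 0.0, 0.0, 0.4]
```

For a fair game this matches the gambler's-ruin answer P_n = n/5; for biased p it matches the formula in the linked article.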

Thanks for your reply. I'm only in my first year of a degree, so I haven't come across the Borel–Cantelli lemma, and I was told the limiting matrix can be made up of the eigenvectors whose eigenvalue is 1. I see it is obvious that eventually one of the players will win, but I thought using eigenvectors might tell me the probability of each player winning. I've had a look at the article you suggested but don't fully understand it; bearing in mind I am only a first year, could you explain the basic idea of recurrence relations? Thanks again 212.140.139.225 (talk) 15:50, 6 December 2007 (UTC)


 * The lemma I referred to is just the first thing that came to my head for proving rigorously that the game eventually ends: if you can see that this is obvious, then that's certainly good enough for a first year. The problem with using eigenvectors is that this only tells you the possible limiting distributions, which you knew already. You have to do more work to find out the probability of one result rather than the other. Let $$P_n$$ be the probability that A wins starting with n counters. We have boundary conditions $$P_0=0$$ and $$P_5=1$$, since in these cases the game has ended already. For n strictly between 0 and 5, there is a probability p that A will gain a counter (giving him n+1 in total) and a probability 1-p that he will lose one (giving him n-1). We thus have the recurrence relation $$P_n=pP_{n+1}+(1-p)P_{n-1}$$. Standard techniques (given in Recurrence relation) allow us to solve this relation with these boundary conditions to obtain $$P_n$$ for all n. Algebraist 18:12, 6 December 2007 (UTC)
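For completeness, here is how those standard techniques play out for this recurrence (a routine computation sketched from the boundary conditions above). Rewriting the relation as $$pP_{n+1}-P_n+(1-p)P_{n-1}=0$$, the characteristic equation is

$$pr^2-r+(1-p)=0,$$

with roots $$r=1$$ and $$r=\frac{1-p}{p}$$. For $$p\neq\frac{1}{2}$$ the roots are distinct, so writing $$q=\frac{1-p}{p}$$, the general solution is $$P_n=A+Bq^n$$. The boundary conditions $$P_0=0$$ and $$P_5=1$$ force $$A=-B$$ and $$B=\frac{1}{q^5-1}$$, giving

$$P_n=\frac{1-q^n}{1-q^5}.$$

For $$p=\frac{1}{2}$$ the roots coincide, the general solution is $$P_n=A+Bn$$, and the boundary conditions give $$P_n=\frac{n}{5}$$.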