Wikipedia:Reference desk/Archives/Mathematics/2008 May 24

= May 24 =

Drawing Graphs (Graph Theory, not Functions)
I have an adjacency matrix (in Excel 07) from which I would like to generate a graph in order to be able to visualise the data nicely. Is there a program that will generate a graph from an adjacency matrix specified in any format Excel can output (e.g. xls, csv, etc.)? I also need a couple of features: specifically, my graph is weighted (but undirected), so I would like to be able to visualise these weights either by colour or by thickness of the edge. In addition, it would be really great if the program would allow me to have a colour/size scale for the nodes based on how connected they are, or some other number that I could assign to each node.

I have looked at graphviz, but the GUI is basically non-existent and it looks like I would have to re-enter all the data; I looked at an Excel frontend for it, but that wasn't very promising either. I also found a program called yEd, but that doesn't seem to allow me to enter an adjacency matrix. I don't want to do it manually because the graph is highly connected and has ~40 nodes. Any ideas?--AMorris (talk)  &#x25CF;  (contribs)  04:42, 24 May 2008 (UTC)


 * Hmmm. Even if you find appropriate graph generation software, will it be of much use to you? A highly connected graph with ~40 nodes is very unlikely to be planar. Unless the graph has some special symmetries, I would think a diagram is going to be very messy, and visually almost indistinguishable from a complete graph. Gandalf61 (talk) 14:30, 24 May 2008 (UTC)


 * That's true, but I'm more looking to use the graph as just a broad visual representation of the data. The idea is not so much to visualise the existence of the connections, but more the weight of the connections, either through colour or line thickness.  Unfortunately, all the programs seem to require manual entry, or I would need to undergo a steep learning curve to use them.  AMorris  (talk)  &#x25CF;  (contribs)  10:03, 25 May 2008 (UTC)
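For what it's worth, one low-tech route is to convert the CSV export into Graphviz's DOT format with a short script, mapping each weight to the `penwidth` edge attribute, and let Graphviz's layout engines do the drawing. This is only a sketch: the function name is made up, and it assumes the adjacency matrix is exported with node labels in the first row and first column.

```python
import csv
import io

def adjacency_csv_to_dot(csv_text):
    """Turn a CSV adjacency matrix (labels in the first row and column)
    into an undirected Graphviz DOT graph, with edge weight shown as
    line thickness via the penwidth attribute."""
    rows = list(csv.reader(io.StringIO(csv_text)))
    labels = rows[0][1:]
    lines = ["graph G {"]
    for i, row in enumerate(rows[1:]):
        for j in range(i + 1, len(labels)):  # undirected: upper triangle only
            w = float(row[1 + j])
            if w > 0:
                lines.append('  "%s" -- "%s" [penwidth=%.1f];'
                             % (labels[i], labels[j], w))
    lines.append("}")
    return "\n".join(lines)

dot = adjacency_csv_to_dot("x,A,B,C\nA,0,2,0\nB,2,0,1\nC,0,1,0\n")
```

Node colour/size scales could be handled the same way, by emitting per-node `fillcolor` or `width` attributes computed from, say, the sum of incident weights.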

Connected and Totally Bounded metric spaces
I've just finished a course in metric spaces, and connectedness was covered very briefly. My intuition tells me that every bounded connected space is totally bounded, but I'm having a hard time proving it or finding a counterexample (the only examples I know of bounded but not totally bounded metric spaces are disconnected). Is this intuition true or false? And how might I go about proving it? --196.210.152.31 (talk) 07:51, 24 May 2008 (UTC)
 * Try looking at infinite-dimensional examples. Have you had any Banach space theory? --Trovatore (talk) 08:25, 24 May 2008 (UTC)
 * No, I have not, but I will take a look. Thank you. --196.210.152.31 (talk) 08:36, 24 May 2008 (UTC)
 * More directly, you can make any metric space bounded without making it totally bounded or changing its topology in the slightest. Just define a new distance to be the minimum of the old distance and 1. Algebraist 11:06, 24 May 2008 (UTC)
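Spelling Algebraist's truncation out: given a metric $$d$$ on a space $$X$$, define

$$d'(x,y) = \min\{d(x,y),\,1\}.$$

This $$d'$$ is a metric (the triangle inequality follows from $$\min\{a+b,\,1\} \le \min\{a,\,1\} + \min\{b,\,1\}$$ for $$a,b \ge 0$$), it agrees with $$d$$ on all distances below 1 and therefore induces the same topology, and $$(X,d')$$ is bounded by 1. Moreover, for any $$\varepsilon < 1$$ an $$\varepsilon$$-net for $$d'$$ is also an $$\varepsilon$$-net for $$d$$, so the truncation does not create total boundedness either.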

OK, let's take a stab at finding a counterexample. Start with a countably infinite metric space in which the distance between any two points is 1. Clearly bounded, not connected, and not totally bounded. Now let's connect it: between any pair of points a and b, put a continuum that looks like the interval (0, 1) between them, so you've got a closed interval with a at one end and b at the other, and let the distance be the ordinary metric on [0, 1]. Now it's connected, but what is the distance between a point between a and b and a point between b and c, and what is the distance between a point between a and b and a point between c and d? Maybe if you can figure out reasonable answers to those questions, you'll get a space that is bounded (no distance greater than 1), connected, and not totally bounded. Michael Hardy (talk) 14:34, 26 May 2008 (UTC)
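One way to make that precise (a sketch only, and just one of several possible choices of metric): take the shortest-path length metric on the union of the edges, truncated at 1,

$$d(p,q) = \min\Bigl\{1,\ \inf_{\gamma\,:\,p\to q}\ell(\gamma)\Bigr\},$$

where the infimum runs over paths $$\gamma$$ from p to q along the edges. For p between a and b and q between b and c, the shortest path runs through b, so $$d(p,q)=\min\{1,\,d(p,b)+d(b,q)\}$$; for p between a and b and q between c and d, every path must traverse at least one full edge, so $$d(p,q)=1$$. With this metric the midpoints of the edges form an infinite set of points at mutual distance 1, so the space is indeed bounded by 1 and connected but not totally bounded.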

Surprise!
I have a logic problem that's been bothering me for years - it comes in the form of a true story:


 * I had a bet with my sister (which I lost); my penalty was that I had to take her to lunch. I asked her when we should meet up.  Being a fun-loving person, she said, "Well, today is Sunday - I'm going on vacation on Saturday morning so it has to be before then.  Surprise me!  Just show up at work around noon someday this week and we'll do lunch...but I want to be surprised - don't tell me you're coming, just show up."

So I could take her on Monday, Tuesday, Wednesday, Thursday or Friday. Let's stipulate that I always keep my promise, we're both intelligent, we think very much alike. Is it possible to surprise her?

On the face of it, it seems easy - I just need a random number. So maybe I should roll a die: 1=Monday, 2=Tuesday...5=Friday, and on a 6, I re-roll. Easy - right?
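The re-roll scheme is ordinary rejection sampling; as a sketch (Python, names invented for illustration):

```python
import random

WEEKDAYS = ["Monday", "Tuesday", "Wednesday", "Thursday", "Friday"]

def pick_day():
    """Roll a six-sided die; re-roll on a 6, so each weekday
    comes up with equal probability 1/5."""
    while True:
        roll = random.randint(1, 6)  # fair die: 1..6 inclusive
        if roll <= 5:
            return WEEKDAYS[roll - 1]
```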

But there's a problem: If I were to happen to roll a 5 (Friday - the last possible day) then on Friday morning my sister will think to herself: "He didn't take me to lunch yet - and he promised to do it this week - so today MUST be the day"...so it's not a surprise when I show up on Friday lunchtime - which means that Friday cannot be an acceptable result.

OK - so I have to choose between 1=Monday, 2=Tuesday, 3=Wednesday, 4=Thursday and re-roll the dice if I get a 5 or a 6. But now we have the same problem with Thursday. She knows (because we've stipulated that she thinks exactly like me) that Friday wouldn't be a surprise - so it's impossible. Hence: on Thursday morning, she knows that I can't leave it until Friday to take her to lunch because that wouldn't be a surprise - so it must be happening today...so again, it won't be a surprise. That means that Thursday is impossible too.

Sadly, now that Thursday is out of the question - so is Wednesday...and that means that Tuesday is impossible...and that only leaves Monday - and that won't be a surprise because it's the only possible day remaining.

Hence no day is truly (logically) a surprise.

This seems like a bogus argument - but I can't find a logical hole in it. Is it true that it's impossible to truly surprise someone under these circumstances?

70.116.10.189 (talk) 15:29, 24 May 2008 (UTC)
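The backward induction in the question can be mechanized: at every stage, the last surviving day could never be a surprise, so it is struck out, and the process repeats until nothing is left. A sketch (illustrative only):

```python
def eliminate_days(days):
    """Backward induction: the last remaining candidate day can never
    be a surprise, so strike it out and repeat until none remain."""
    days = list(days)
    struck = []
    while days:
        struck.append(days.pop())  # the final day is forced, hence no surprise
    return struck

# Days fall from Friday backwards to Monday, mirroring the argument above.
order = eliminate_days(["Mon", "Tue", "Wed", "Thu", "Fri"])
```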
 * This problem is well known as the unexpected hanging paradox. That article explains some approaches that have been attempted to resolve the problem. Algebraist 16:13, 24 May 2008 (UTC)


 * Cool - it's good to know the mathematicians are earning their keep answering everyday problems! Many thanks - I'm off to read it carefully. SteveBaker (talk) 17:27, 24 May 2008 (UTC)


 * That's not fair. It is unreasonable to expect someone to assume the OP knew about the article after they'd typed out that scenario in such detail. Zain Ebrahim (talk) 17:41, 24 May 2008 (UTC)


 * Introduce a tiny bit of uncertainty. "Dear, dear sister.  Please keep in mind that there is always the smallest probability that even the best made and most carefully thought-out promises cannot be kept (due to unexpected hospital stays for example), and that despite best efforts, it is always possible that I might have to take you to lunch after you get back from your trip."  Then take her Tuesday.  --Prestidigitator (talk) 19:11, 24 May 2008 (UTC)


 * The way I would look at it is like this: Since every day has been ruled out, there is the same probability (albeit zero) that it will occur on any day of that week.  Since it must occur (because you always keep your promises), and it has an equal probability of occurring any day, then there is no certainty as to when it will occur or will not occur.  Thus, any day (except Friday) it would be a surprise.  If you take them any day but Friday, they don't know if you'll wait until the next day or not.  (This is the first time I have heard of this paradox, so that may be flawed logic.) Ζρς ι'β'  ¡hábleme! 19:19, 24 May 2008 (UTC)


 * But his/her sister is also intelligent (they think alike) so when she gets up on Thursday, she'll know that it has to be the day (because Friday is out) and therefore it's not a surprise. And the same reasoning can be applied to Wednesday, Tuesday and Monday. Zain Ebrahim (talk) 19:36, 24 May 2008 (UTC)


 * Ah, yep, I got it now. I thought about it one way and it was logical, then the other way, and it was illogical.  Personally, I like Prestidigitator's idea. Ζρς ι'β' ¡hábleme! 20:19, 24 May 2008 (UTC)


 * Reduce the problem to two days (Thursday and Friday). Say your sister expects you to take her out on Thursday (since obviously you can't do it on Friday). But lunchtime comes and goes and you don't show up. Now what? If we allow her to expect to be taken out on Friday also, then your argument is valid, but all it shows is the trivial fact that if she expects to be taken out every day then she won't be surprised on the day that she is. If we don't allow her to expect to be taken out every day, then your argument breaks down immediately; you can surprise her on any day, including Friday, because there's a good chance she'll have blown her one chance at unsurprise before then.


 * That's one solution, anyway; it's not the only one because there's more than one way of interpreting English words like "expect". You can turn it into a problem about the slippery nature of belief (can your sister really make herself believe in Thursday after having reasoned as I just did?), but then it's a problem of psychology rather than mathematics. You can replace "expect" with "logically deduce" and turn it into a problem of formal logic. The article is misleading when it says that "no consensus on its correct resolution has yet been established", since one doesn't expect a consensus on the meaning of an utterance in the English language. If you ever find yourself in the condemned prisoner's position, keep in mind the parable of the dagger. Not that it'll save you from your fate. -- BenRG (talk) 22:08, 24 May 2008 (UTC)


 * I don't follow. When you say that, are you referring to:
 * the fact that she can't expect to be taken out and be surprised, or
 * if he/she doesn't take her out on Thursday, she'll expect to be taken out on Friday?
 * In either case, I think that we have to allow that. Zain Ebrahim (talk) 22:23, 24 May 2008 (UTC)


 * I reworded the paragraph to try to clarify it. It would probably have made more sense if I'd used "predict" instead of "expect", since that implies more certainty. It doesn't sound fair to predict that the lunch will happen every day and then claim to be vindicated when it finally does. It sounds somewhat fairer to expect it every day, and I'm happy to allow her to, but the problem becomes trivial if we do. It's all about the vagueness of the terms. Replace "expect" with "know". Define "know" such that one can't know something that doesn't come to pass, i.e. there are no possible worlds where your sister knows you're going to take her to lunch on day N and you don't do so. Then she can't know that you'll take her to lunch on any day, even Friday, since there's no law of physics compelling you to do so; thus you can fulfill the bargain by taking her out on any day, even Friday. Alternatively we could define "know" such that there are no worlds where all three of the following are true: she knows you'll take her to lunch on day N, you don't do so, and you ultimately fulfill the bargain. In that case she can know on Friday but not on any earlier day, since on Thursday she's forced to acknowledge the existence of a possible world where (for some reason) she doesn't know on Friday and you take her out then. Or we could define "know" even more broadly by allowing her to eliminate possible worlds based on plausible assumptions about her own future behavior. In that case she can (choose in advance to) "know" you'll take her to lunch every day, and so you can't fulfill the bargain. Once you define your terms precisely enough the problem can be solved, and the solution depends on the definition. If you don't define your terms and try to reason intuitively about your knowledge/beliefs/predictions then you're doomed, because they'll just keep flip-flopping. -- BenRG (talk) 00:10, 25 May 2008 (UTC)


 * Thanks for the link! The parable of the dagger has some surprising applications... For example, when faced with the Monty Hall problem, no longer do you have to rely on mere probabilities - once you have two doors remaining, you simply put two post-it notes on the doors, one of them saying "either both post-it notes are true or both are false" and the other saying "there is a prize behind this door". Once you have done this, it is logically impossible that there is no prize behind the latter door and you can safely choose it.  I never knew logic could be so practically useful. 84.239.133.86 (talk) 19:51, 26 May 2008 (UTC)


 * Was the criterion that she can choose the day for the lunch also part of the bet? If you promised in the bet that you'd take her to lunch, but then later just casually asked what day would be best for her, then I think it might not be a breach of your promise if you took her to lunch on a day other than the one she said would be best.  It was nice of you to ask, because obviously it would be bad if she had some other engagement at the time you scheduled the lunch, but letting her attach an almost impossible criterion to the day wasn't really part of the penalty.  On the other hand, you could also show up on any day and surprise her in some way other than by the choice of the day.  &#x2013; b_jonas 11:35, 25 May 2008 (UTC)


 * Take her out on Friday. Monday morning, she will think that you have thought through the puzzle and will take her out on Monday because it is the "most" surprising.  Tuesday morning, she will think that you are being clever and waiting until Tuesday.  On Wednesday she will be yet more expectant, and on Thursday she will be 100% sure that that is the day.  On Friday she will think you have forgotten.  This way, you surprise her 5 times instead of just once.
 * BenRG's argument is very persuasive... also, that "parable of the dagger link" is great.
 * Where is your sister going? If it's very close, then you could contrive to take her to lunch on Saturday.  (And if to you the week starts on Monday, Sunday is also an option.)  Eric. 81.159.33.9 (talk) 17:53, 25 May 2008 (UTC)


 * If the original quote is exact, there's a whole world of opportunity. Don't think so small. For instance, you could hide above a ceiling tile the night before, and drop down behind her when she isn't looking. That'd be a surprise. You might have to make a deal with security ahead of time. Black Carrot (talk) 05:58, 30 May 2008 (UTC)

Finding Nash equilibrium
Hi. I have functions $$f_i : \mathbb{R}^p \to \mathbb{R}$$, for $$1 \le i \le p$$. I am interested in finding the Nash equilibrium of the functions, that is, a point $$x = (x_1,\ldots,x_p) \in \mathbb{R}^p$$ such that for every $$1 \le i \le p$$ and every $$t \in \mathbb{R}$$ we have $$f_i(x_1,\ldots,x_{i-1},t,x_{i+1},\ldots,x_p) \le f_i(x)$$. Does anyone know a good algorithm for this? Ideally it should only evaluate the functions themselves and not their derivatives.

The naive method is of course to optimize each coordinate separately in every iteration. However, this can fail to converge, and at best converges linearly. I tried a basic google search, but could only find discussions of Nash equilibria for mixed strategies from a discrete space (then again, I have not yet fully mastered the most important skill of the modern world).

So, any suggestions? Thanks. -- Meni Rosenfeld (talk) 20:06, 24 May 2008 (UTC)


 * The first idea that comes to my mind is iterated elimination of dominated strategies: for each coordinate i, you may discard the set of $$y\in \mathbb{R}$$ such that for all $$\mathbf{x}_{-i} \in \mathbb{R}^{p-1}$$ there exists $$x\in \mathbb{R}$$ for which $$f_i(\mathbf{x}_{-i},x)>f_i(\mathbf{x}_{-i},y)$$. In this way, you are dismissing sections of the entire space (or of a hypercube, if the domain is somehow restricted) that will never host a NE. If you replace the ">" with $$\ge$$, then you will (eventually) converge to a NE, though which NE you find may depend on the order of coordinates in which you have iteratively discarded strategies. This idea may not be useful in general, depending on the structure of the utility functions $$f_i$$.
 * I know you don't want to talk about conditions on derivatives, so I assume that the fact that at an interior NE $$\frac{\partial f_i}{\partial x_i}=0$$ for all i isn't of much help in your case. Hope some of this helps. Pallida  Mors  15:17, 26 May 2008 (UTC)
 * Thanks. Unfortunately, I am not sure how to go about eliminating dominated strategies in practice. My functions are quite complicated and I see no way to analyze them symbolically; I treat them as an oracle to which I can provide an $$x \in \mathbb{R}^p$$ of my choice and which, at a significant computational cost, outputs the values. I was hoping for an algorithm reminiscent of the secant method for root-finding (in the sense of using nothing more than algebraic operations on the values of the function). If there are only good algorithms which require derivative evaluation, I could try to find an expression for the derivatives, but that would be a challenge on its own, and evaluation of those derivatives will be even more expensive. -- Meni Rosenfeld (talk) 16:19, 26 May 2008 (UTC)
 * If a function f is merely an oracle for answering f(x) when asked x, then you cannot assume continuity, and even the simple equation f(x) = 0 cannot be solved by any method except trial and error. The secant method assumes continuity. Bo Jacoby (talk) 16:54, 26 May 2008 (UTC).
 * My access to f is through an oracle. That doesn't mean that the function f itself cannot have nice properties. In particular, my functions are continuous and almost everywhere differentiable, and I'll be happy with an algorithm that assumes the functions are smooth. -- Meni Rosenfeld (talk) 17:18, 26 May 2008 (UTC)
 * It doesn't look like an easy task. Speaking about the secant method, how about the following: Fix an initial vector $$\mathbf{x}^0$$, and iterate following $$\mathbf{x}^{n+1}=\mathbf{x}^n+k(\varphi(\mathbf{x}^n)-\mathbf{x}^n)$$
 * ...where k is a speed-of-adjustment parameter for the procedure, and $$\varphi(\mathbf{x}^n)$$'s coordinates $$\varphi_i(\mathbf{x}^n_{-i})$$ are the best responses of each player (function) to the remaining coordinates of the vector? In case the exact best response is really costly to find, you may pick a representative $$\varphi_i$$ which at least has the correct sign of the variation. The greater k is, the faster the algorithm will arrive at a NE, but the greater the chances that the algorithm fails to converge. If it converges at all, it clearly converges to a Nash equilibrium. Pallida  Mors  04:51, 27 May 2008 (UTC)
 * I see. So the naive method will be a special case with $$k=1$$. This is indeed the kind of algorithm I had in mind, and it does seem to solve the problem of a repelling equilibrium. If implemented properly, its performance will probably be good enough, but I'll be happy to hear any other suggestions. -- Meni Rosenfeld (talk) 17:45, 27 May 2008 (UTC)
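The damped iteration $$\mathbf{x}^{n+1}=\mathbf{x}^n+k(\varphi(\mathbf{x}^n)-\mathbf{x}^n)$$ can be sketched in code. Everything concrete below is an assumption for illustration only: the payoff functions, the search interval, and the use of ternary search (valid when each coordinate section is unimodal) as the derivative-free inner maximizer.

```python
def argmax_1d(g, lo=-10.0, hi=10.0, iters=100):
    """Derivative-free maximizer of a unimodal function g on [lo, hi]
    (ternary search: shrink the bracket by a third each step)."""
    for _ in range(iters):
        m1 = lo + (hi - lo) / 3
        m2 = hi - (hi - lo) / 3
        if g(m1) < g(m2):
            lo = m1
        else:
            hi = m2
    return (lo + hi) / 2

def damped_best_response(fs, x0, k=0.5, tol=1e-9, max_iter=200):
    """Iterate x <- x + k*(phi(x) - x), where phi_i maximizes f_i over
    coordinate i with the other coordinates held fixed."""
    x = list(x0)
    p = len(x)
    for _ in range(max_iter):
        phi = []
        for i in range(p):
            def section(t, i=i):
                y = list(x)
                y[i] = t
                return fs[i](y)
            phi.append(argmax_1d(section))
        x_new = [x[i] + k * (phi[i] - x[i]) for i in range(p)]
        if max(abs(a - b) for a, b in zip(x_new, x)) < tol:
            return x_new
        x = x_new
    return x

# Illustrative two-player game: f_i is maximized at x_i = 0.5 * x_j,
# so the unique Nash equilibrium is (0, 0).
f1 = lambda v: -(v[0] - 0.5 * v[1]) ** 2
f2 = lambda v: -(v[1] - 0.5 * v[0]) ** 2
x = damped_best_response([f1, f2], [3.0, -2.0])
```

With $$k=1$$ this reduces to the naive coordinate-wise best-response iteration; smaller k trades speed for stability, as discussed above.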
 * Do you know if the $$f_i(x)$$ are analytic functions? Note that if g is an increasing function, then $$g(f_i(x))$$ can be used instead of $$f_i(x)$$. This may lead to a simplification of the problem. Bo Jacoby (talk) 09:27, 27 May 2008 (UTC).
 * It depends. For some of my applications the functions will be analytic, for some they will have manifolds of nondifferentiability (Just to demonstrate what I mean, they will be, smoothness-wise, similar to $$f(x,y)=\left\{\begin{array}{ll}x^2+y^2&x^2+y^2\le1\\2(x^2+y^2)-1&x^2+y^2>1\end{array}\right.$$ which is continuous everywhere but has a circle of nondifferentiability).
 * I can think of no monotonic function to apply that would simplify the problem. -- Meni Rosenfeld (talk) 17:45, 27 May 2008 (UTC)


 * Not sure if it's at all relevant to your question, but it has been shown that finding Nash equilibria is PPAD-complete.  I was at a talk by Christos Papadimitriou yesterday where he mentioned this result. Oliphaunt (talk)
 * Nice to know, but it's not so relevant, as the paper seems to discuss the case of mixed strategies over a discrete space. -- Meni Rosenfeld (talk) 12:06, 28 May 2008 (UTC)
 * The problem of mixed strategies over a discrete strategy space is obviously a special case of pure strategies over a continuous (possibly multidimensional) strategy space. If you insist on each player's strategy being a single real number, you need an injection from $$\mathbb{R}^n$$ to $$\mathbb{R}$$, such as the inverse of a space-filling curve, but this may leave you with somewhat ugly payoff functions.  Still, it at least suggests that your problem may be hard in the general case.  —Ilmari Karonen (talk) 13:45, 28 May 2008 (UTC)

Roots of $$f(t)$$
I’d like to check my understanding here. I’m trying to write the rth roots of a function of $$t$$ in a concise manner. Specifically say I want to write the set of all rth roots of $$f(t)$$. Is it correct to write $$\left \{ \zeta_{r}^{n} \sqrt[r]{f(t)} : 0 \leq n \leq r-1 \right \} $$ where $$\zeta_r$$ is an rth root of unity, and $$\sqrt[r]{f(t)}$$ is some rth root of $$f(t)?$$ GromXXVII (talk) 21:51, 24 May 2008 (UTC)
 * Mostly yes. However:


 * 1) $$\zeta_r$$ must be a primitive rth root of unity, not just any root.
 * 2) If t was fixed you would just have $$f(t)=a$$, and then an adequate description of the set of all rth roots of a would be "$$\left \{ \zeta_{r}^{n} \sqrt[r]{a} : 0 \leq n \leq r-1 \right \} $$ where $$\zeta_r$$ is a primitive rth root of unity, and $$\sqrt[r]{a}$$ is some rth root of $$a$$", because there are only a finite number of choices and it doesn't matter which one you take. However, if you want this to apply to a variable t - that is, define a function $$R : \mathbb{C} \to \mathcal{P}(\mathbb{C}),\ t \mapsto \left\{  \zeta_{r}^{n} \sqrt[r]{f(t)} : 0 \leq n \leq r-1 \right \}$$ - it gets a little trickier. Now "$$\sqrt[r]{f(t)}$$ is some rth root of $$f(t)$$" comprises an infinitude of choices, and with the way you have phrased this, it is not immediately obvious that this is at all possible. In this particular case it is of course a non-issue as providing an explicit choice function is trivial, but this goes to show you that the description is not perfect. A possible fix is to replace "$$\sqrt[r]{f(t)}$$ is some rth root of $$f(t)$$" with "$$\sqrt[r]{}$$ is some branch of the rth root" (which we take to be known to exist).
 * 3) If you want to be even more precise, you can specify explicitly that n must be an integer.
 * All that said, if all you want is a concise notation for the set of roots, why not just write $$\{z \in \mathbb{C}:z^r=f(t)\}$$? -- Meni Rosenfeld (talk) 22:16, 24 May 2008 (UTC)
 * In response to your last point: That's a set of complex numbers which depends on t, the OP seems to be looking for a set of functions. If you have a fixed t, then you're ok, but I'm assuming that's not the case the OP is interested in (if it was, why mention f at all, and not just give f(t) its own name?). --Tango (talk) 22:57, 24 May 2008 (UTC)
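To illustrate the fixed-value case numerically (a sketch; the helper name is invented, and `complex(a) ** (1.0 / r)` is just one admissible choice of $$\sqrt[r]{a}$$, the principal branch):

```python
import cmath

def rth_roots(a, r):
    """All r-th roots of a: one fixed r-th root of a, times the powers
    of a primitive r-th root of unity."""
    root = complex(a) ** (1.0 / r)        # some r-th root of a
    zeta = cmath.exp(2j * cmath.pi / r)   # primitive r-th root of unity
    return [zeta ** n * root for n in range(r)]

roots = rth_roots(1, 4)  # the four 4th roots of unity
```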