Wikipedia:Reference desk/Archives/Mathematics/2010 January 15

= January 15 =

Is there a name for this property of sets?
Is there a name for the property of a set that its union set is the set itself? For example, the natural numbers under the standard construction satisfy this property. --129.116.47.169 (talk) 00:25, 15 January 2010 (UTC)
 * I'd say that such a set is ordered by $$\in$$, hence well-ordered, since that is explicitly required by ZF; it's an ordinal. --pm a 09:20, 15 January 2010 (UTC)
 * No, it doesn't have to be an ordinal (nor do all ordinals qualify). For example $$V_\omega$$ has the desired property, but 1 (understood as $$\{\emptyset\}$$) does not; its union is just $$\emptyset$$.  It would have to be a transitive set, but not all transitive sets work (e.g. 1 again).  I don't know of a specific name for the property. --Trovatore (talk) 09:26, 15 January 2010 (UTC)
 * Ach, I realized it and was just on the point of correcting it, but you are too fast :-)  --pm a  09:31, 15 January 2010 (UTC) So we may rephrase the assumption on such a set X by saying it's a transitive set with no "maximal element" w.r.t. the relation $$\in$$ (in quotes because $$\in$$ needn't be transitive as a relation on X; X has no maximal element w.r.t. $$\in$$ in the sense that for any $$x \in X$$ there is $$y \in X$$ such that $$x\in y$$).  One could also say X is a transitive set that is non-well-founded upwards. --pm a  09:38, 15 January 2010 (UTC)
 * ω + 1 is transitive and upwards non-well-founded, but it does not have the property. — Emil J. 18:52, 15 January 2010 (UTC)
 * Haha!! oops --pm a 21:42, 15 January 2010 (UTC)
 * Unless I'm missing something, none of the finite natural numbers under the standard construction have this property. Since n = {0, ..., n-1}, the union set of n is surely {0, ..., n-2}, that is, n-1. For example: 3 = {0, 1, 2} = {{}, {0}, {0, 1}}, and thus its union set is {0, 1} = 2. -- The Anome (talk) 17:07, 15 January 2010 (UTC)
 * The set of all natural numbers satisfies it. I think that is what the OP meant. --Tango (talk) 17:14, 15 January 2010 (UTC)
 * Ah. That would make sense. -- The Anome (talk) 17:24, 15 January 2010 (UTC)
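The finite computation above can be sketched in Python, modelling the von Neumann naturals as nested frozensets (the helper names are my own, purely illustrative):

```python
from functools import reduce

def von_neumann(n):
    """The von Neumann natural n: 0 = {}, n+1 = n U {n}."""
    s = frozenset()
    for _ in range(n):
        s = s | frozenset([s])
    return s

def union_set(s):
    """The union set of s: all elements of elements of s."""
    return reduce(frozenset.union, s, frozenset())

# The Anome's example: the union set of 3 = {0, 1, 2} is {0, 1} = 2
assert union_set(von_neumann(3)) == von_neumann(2)
# Trovatore's example: the union of 1 = {emptyset} is just emptyset
assert union_set(von_neumann(1)) == frozenset()
```

No finite natural satisfies ∪n = n, which is the point of the exchange above; the full set ω does, but of course it cannot be built by a finite computation like this one.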

Thanks for the responses, guys.

Tango is right. I should have made that clearer.

pm a's description as a transitive set with no maximal element seems to work: from equating the two sets in my original characterization of X, you get that x is an element of X iff x is an element of an element of X. pm a's first condition expresses the backward implication, and his second condition expresses the forward implication.

Am I correct in thinking that pm a was wrong when he said the set is required to be well-ordered under $$\in$$? Isn't $$V_\omega$$ a counterexample? --128.62.37.240 (talk) 01:09, 16 January 2010 (UTC)
 * Yes, that was part of my first sentence (that the set is an ordinal) and was wrong; however, the relation $$\in$$ is well-founded (no infinite descending chains exist) by the axiom of regularity of ZF (assuming you are working there). --pm a 08:40, 16 January 2010 (UTC)

Is a set that contains only itself hereditary?
In the process of copyediting hereditary set, I found myself writing the sentence


 * In non-well-founded set theories where such objects are allowed, a set that contains only itself is also a hereditary set.

It then occurred to me not only that this may or may not be true, but that it might not even be a meaningful statement. Consider the set E = {E}. By the definition of hereditary sets, if E is hereditary, then {E} is hereditary, which merely restates the initial premise. If E isn't hereditary, then E isn't hereditary, again restating the initial premise. I can't see how to get a better handle on this problem. Can anyone help? -- The Anome (talk) 14:59, 15 January 2010 (UTC)
 * The usual way to unambiguously phrase such definitions in non-well-founded set theories is to define that A is a hereditary xxx iff every object in the transitive closure of {A} is a xxx (note that this is equivalent to the inductive definition if the universe is well-founded). Your E is thus indeed a hereditary set. — Emil J. 15:09, 15 January 2010 (UTC)
 * Thanks! Could you, or anyone else with good set theory knowledge, please update the hereditary set article to reflect this? I'm afraid I'm outside my area of competence here. -- The Anome (talk) 15:12, 15 January 2010 (UTC)
 * OK, I've expanded it a bit. — Emil J. 16:05, 15 January 2010 (UTC)


 * Thanks! -- The Anome (talk) 16:55, 15 January 2010 (UTC)
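EmilJ's transitive-closure reformulation can be sketched in Python, though only for the well-founded case (a frozenset cannot represent E = {E}, so the non-well-founded side of the question is out of reach here; all names are my own):

```python
def transitive_closure_of_singleton(a):
    """All objects reachable from {a}: a itself, its elements, their elements, ..."""
    seen, stack = set(), [a]
    while stack:
        x = stack.pop()
        if x not in seen:
            seen.add(x)
            if isinstance(x, frozenset):   # only sets have elements to descend into
                stack.extend(x)
    return seen

def is_hereditary_set(a):
    """a is a hereditary set iff every object in the transitive closure of {a} is a set."""
    return all(isinstance(x, frozenset) for x in transitive_closure_of_singleton(a))

empty = frozenset()
assert is_hereditary_set(frozenset([empty, frozenset([empty])]))  # {0, 1} is hereditary
assert not is_hereditary_set(frozenset([42, empty]))  # 42 plays the role of an urelement
```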

Continuity of power functions
How can one prove the continuity of all functions f: ℝ→ℝ, $$f(x)=|x|^a$$, a>0, at x=0? --84.61.165.65 (talk) 15:30, 15 January 2010 (UTC)


 * Whenever you want to prove a function is continuous at a single point the best method is almost always to just apply the definition. Write down the definition, plug in this function and this point and see what happens. --Tango (talk) 17:10, 15 January 2010 (UTC)
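For the record, carrying out Tango's suggestion: with $$f(x)=|x|^a$$ and $$f(0)=0$$, the definition unwinds to a one-line choice of δ (a sketch, assuming a > 0):

```latex
% epsilon-delta proof of continuity at x = 0, for a > 0; note f(0) = |0|^a = 0.
\forall \varepsilon > 0:\quad \text{take } \delta := \varepsilon^{1/a}. \\
\text{Then } |x - 0| < \delta \;\Longrightarrow\; |f(x) - f(0)| = |x|^a < \delta^a = \varepsilon, \\
\text{since } t \mapsto t^a \text{ is increasing on } [0, \infty).
```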

Using monotonicity of both the power function and its inverse may help. Michael Hardy (talk) 16:01, 16 January 2010 (UTC)

Or just take a logarithm, if you have it in your toolbox. --pm a 23:10, 16 January 2010 (UTC)
 * Note that the question is about $$x=0$$. I doubt the right-continuity of log at 0 is in the OP's toolbox. -- Meni Rosenfeld (talk) 15:32, 17 January 2010 (UTC)


 * Who knows! It's just about $$\scriptstyle\inf_{x>0}\log x=-\infty,$$ not so esoteric. pm a 17:23, 17 January 2010 (UTC)
 * The thing is that applying the term "continuous" here requires moving from interpreting $$-\infty$$ as a piece of notation to including it as an actual element of the topological space under consideration, which is relatively advanced. -- Meni Rosenfeld (talk) 17:48, 17 January 2010 (UTC)
 * Well, this is going to be a theoretical debate about the content of the OP's head, since (s)he has already evaporated. But saying that a sequence of positive numbers converges to zero if and only if the sequence of their logarithms converges to negative infinity (as a piece of notation, why not) is an elementary fact about limits, even more elementary than the continuity of $$|x|^a$$ at 0 for a>0, and as far as I know they usually teach it in basic calculus courses. But note that somehow I shared your doubt, since I put an "if" clause. So, if the OP doesn't have that in the toolbox, it's a good opportunity to put it in. Actually, we should have started by asking the OP what his definition of $$t^a$$ is, and what his definition of continuity is, in order to give a pertinent answer (the Solomonic answer by Tango is good in any case, as usual). I added the log option for completeness, especially in case his definition of $$t^a$$ was $$e^{a \log t}$$, which is reasonable and commonly adopted. pm a 22:44, 17 January 2010 (UTC)
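Spelling out the logarithm route pm a mentions (a sketch, assuming the definition $$|x|^a := e^{a\log|x|}$$ for $$x \neq 0$$):

```latex
x_n \to 0,\ x_n \neq 0
\;\Longrightarrow\; \log|x_n| \to -\infty
\;\Longrightarrow\; a \log|x_n| \to -\infty \quad (a > 0) \\
\;\Longrightarrow\; |x_n|^a = e^{a \log|x_n|} \to 0 = f(0).
```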

Infinite ordinals
Would the infinite ordinal epsilon zero be expressed using hyperoperations as ω [4] ω or ω [ω] ω? 99.237.180.215 (talk) 17:48, 15 January 2010 (UTC)
 * The former, if I understand the notation correctly. — Emil J. 18:57, 15 January 2010 (UTC)
 * What would ω [ω] ω be? 99.237.180.215 (talk) 21:40, 15 January 2010 (UTC)
 * According to epsilon zero, $$\phi_\omega(0) = \omega \uparrow^\omega \omega$$. I'm not 100% sure about the notation, so I'm not sure that is exactly what you are asking for, but it is close. See Veblen function for an explanation of the phi notation. --Tango (talk) 22:54, 15 January 2010 (UTC)
 * Thanks. 76.67.74.33 (talk) 05:18, 16 January 2010 (UTC)
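For what it's worth, the bracket notation a [n] b can be illustrated on finite naturals (the ordinal versions such as ω [4] ω additionally require taking suprema at transfinite stages, which a finite sketch cannot capture; the function name is my own):

```python
def hyper(level, a, b):
    """a [level] b: [1] addition, [2] multiplication, [3] exponentiation, [4] tetration, ..."""
    if level == 1:
        return a + b
    if b == 0:
        return 0 if level == 2 else 1  # base cases: a*0 = 0, a^0 = 1, a^^0 = 1, ...
    # each level iterates the one below: a [n] b = a [n-1] (a [n] (b-1))
    return hyper(level - 1, a, hyper(level, a, b - 1))

assert hyper(2, 3, 4) == 12      # 3 * 4
assert hyper(3, 2, 8) == 256     # 2 ** 8
assert hyper(4, 2, 3) == 16      # 2 ** (2 ** 2), a tower of three 2s
```

(The naive recursion blows up very quickly with the level, which is part of the point of the notation.)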

Properties of certain symmetric matrices?
Suppose I have a plain-old function $$f:\mathbb{N}\to\mathbb{R}$$ from the integers to the reals, and a plain-old matrix $$M$$ with matrix elements $$M_{ij}=f(ij)$$. Aside from the fact that M is symmetric, are there any other "well-known" properties or theorems that apply to this case? Anything in particular about the eigenvectors or eigenvalues of M? (No, the function f is not a multiplicative function, which would have made M into an outer product of two vectors.)  linas (talk) 23:25, 15 January 2010 (UTC)


 * Is the domain of f the integers or the natural numbers? You've said integers and used the symbol for natural numbers... --Tango (talk) 00:23, 16 January 2010 (UTC)
 * Presumably the naturals, unless M is a very odd matrix indeed. Algebraist 00:45, 16 January 2010 (UTC)
 * You mean you don't index your matrix rows starting from -1? --Tango (talk) 00:56, 16 January 2010 (UTC)
 * I'll take any theorems on any oddly numbered indexes you've got :-) In my case, f is tending to zero, not quite exponentially fast, and is oscillatory. 99.153.64.179 (talk) 04:10, 16 January 2010 (UTC)


 * Something is not quite clear to me. Can you specify in more detail what information you have on the order of the square matrix and on the function f? If you are talking of a square matrix $$A_n$$ of a given order n, and if the only information on f is what you wrote, then we are just talking of any symmetric matrix where some entries coincide (for instance if n=6, it's a symmetric matrix where the coefficients may be whatever, with the only constraints $$a_{2,2}=a_{1,4}$$ and $$a_{2,3}=a_{1,6}$$). Of course if n is large the information becomes more relevant, but the fact that f vanishes at infinity never enters, unless what you want is asymptotic information on the matrices $$A_n$$ as n → ∞. Or are you thinking of an infinite matrix, that is, e.g. a linear operator on $$l_2$$? --pm a  08:49, 16 January 2010 (UTC)


 * Not relevant to your problem since f vanishes at infinity, and you're probably aware of this anyway, but if f is a polynomial then the determinant of the matrix will vanish whenever its order exceeds the degree of the polynomial by 2 or more. Dmcq (talk) 09:30, 16 January 2010 (UTC)
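Dmcq's observation follows from writing $$f(t)=\sum_k c_k t^k$$: then $$M_{ij}=f(ij)=\sum_k c_k\, i^k j^k$$ is a sum of deg(f)+1 rank-one matrices, so rank(M) ≤ deg(f)+1 and det M = 0 once the order exceeds deg(f)+1. A quick exact check in Python (the polynomial and the orders are arbitrary choices of mine):

```python
from fractions import Fraction

def det(M):
    """Exact determinant by Gaussian elimination over the rationals."""
    M = [[Fraction(x) for x in row] for row in M]
    n, sign = len(M), 1
    for col in range(n):
        pivot = next((r for r in range(col, n) if M[r][col] != 0), None)
        if pivot is None:
            return Fraction(0)
        if pivot != col:
            M[col], M[pivot] = M[pivot], M[col]
            sign = -sign
        for r in range(col + 1, n):
            factor = M[r][col] / M[col][col]
            for c in range(col, n):
                M[r][c] -= factor * M[col][c]
    result = Fraction(sign)
    for i in range(n):
        result *= M[i][i]
    return result

f = lambda t: 2 * t * t - 3 * t + 1               # a degree-2 polynomial
M6 = [[f(i * j) for j in range(1, 7)] for i in range(1, 7)]
assert det(M6) == 0    # order 6 exceeds degree + 1 = 3, so rank <= 3 and det = 0
M3 = [[f(i * j) for j in range(1, 4)] for i in range(1, 4)]
assert det(M3) == -24  # at order degree + 1 the matrix is (here) nonsingular
```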


 * In case you are interested in the operator on $$l_2$$, it is relevant to know whether your $$f$$ is in $$l_2$$ (that is, $$\textstyle \sum_{n=1}^\infty f(n)^2<\infty$$), which for your situation is equivalent to $$M(i,j):=f(ij)$$ being a Carleman kernel (hence the corresponding operator on $$l_2$$, let's call it $$T$$, is closed). --pm a  10:45, 16 January 2010 (UTC) I'm not sure if $$f\in l_2$$ also implies that the operator $$T$$ is bounded; it shouldn't be difficult to decide (but first you have to decide if you really want to know it). If you have something better on $$f$$, namely $$\textstyle \sum_{n=1}^\infty f(n)^2 n\log\log n<\infty$$, then $$T$$ is a Hilbert-Schmidt operator (thus compact, and its spectrum is an $$l_2$$ sequence of eigenvalues). In conclusion: personally I do not know of results where the above form of a symmetric kernel plays a role (as compared, e.g., with $$M_{i,j}:=f(i-j)$$, which produces discrete convolution operators), but it is quite possible that it has been introduced and studied. I can imagine that these matrices may appear quite naturally in some situations, e.g. in number theory or harmonic analysis (in fact I suppose they came out of some research problem of yours). And, of course, the special form of your matrices may give rise to some simplification when applying known results; however, this is too general to provide an answer here (I mean: if we were as powerful as Google, or if we had 1000 years and nothing else to do, we could produce as an answer 1,000,000 results vaguely related to your matrices, possibly none satisfying your needs). So we really need you to be more specific both on the assumptions and on the conclusions of the results you are looking for. --pm a  17:18, 16 January 2010 (UTC)


 * Heh. The jig is up. Yes, in my case, M is an operator. I've a class of other similar operators (they are all transfer operators, although this is the only symmetric one I have). I can define it on $$l_2$$ and also $$L_2$$; I've reason to believe its spectrum in $$L_2$$ is continuous; I'm more interested in the $$l_2$$ basis, where it has a discrete spectrum. What I actually have is an operator G whose asymptotic expansion has M as its first term (as I mentioned above, the f is oscillatory and decreasing; I'm trying to nail down the details now). (My operator G is bounded; I presume my asymptotic term must be also.) A Google search for 'Carleman operator' indicates that it's a condition on the integral kernel, viz., that when the integral kernel K(x,y) satisfies $$\int |K(x,y)|^2 dy < \infty$$, the operator is a Carleman operator. In my case, I don't believe that my kernel is square-integrable in this way (the kernel for my G has Dirac delta functions in it; I haven't thought about M). So the Carleman condition is a side-track.


 * But this is all wandering off-topic -- I figured that the definition I gave above is 'generic' enough that perhaps something is known about the structure of these kinds of operators. Yes, my particular f is square-summable -- it has $$f(n)\sim e^{-c\sqrt{n}}$$. Also, fwiw, I suspect that this may be relevant but may sound far-fetched to you: note that if $$z=a+ib$$ then $$z^2=a^2-b^2+2iab$$, and I suspect that the reason my operator $$M_{ab}=f(ab)$$ has this product ab of the indexes is that there's a hidden $$z^2$$ elsewhere in the problem (my operators come from analytic number theory, so these kinds of crazy weirdnesses abound). (I can tell you all about my operator G, if you really want to know; it's the GKW operator, but that's beside the point ...)


 * As to having a million years to solve math problems ... well, perhaps in 30 years, we'll be able to attach our brains directly, neuronally, to a calculator having a few exaflops of CPU power ... so we'll all be walking Ramanujans ... exactly what might happen then, who knows. I suppose only rich mathematicians will be able to afford this :-( linas (talk) 18:05, 16 January 2010 (UTC)


 * Sorry if I wasn't clear: talking of the Carleman condition, I referred to $$\mathbb{N}$$ as a measure space (with the counting measure), so that the kernel is a function $$K:\mathbb{N}\times\mathbb{N}\to\mathbb{C}$$, which here has the form $$K(i,j):=f(ij)$$; the Carleman condition you mentioned in this case reads $$\sum_i |K(i,j)|^2 < \infty$$ for all $$j\in\mathbb{N}$$, and this of course is true if $$f\in l^2$$. But that was just a hint, also because I didn't know much about your $$f$$. The other condition I wrote just ensures (due to the asymptotics of the divisor function d(n), of course) the much stronger $$\sum_{i,j} |K(i,j)|^2=\sum_{n=1}^\infty d(n)f(n)^2 < \infty$$, that is, K is a Hilbert-Schmidt kernel. More generally, the operator should be in some p-trace class provided some summability condition on f holds; I guess $$f\in O\left(\exp(-c\sqrt{n})\right)$$, as you say, should give you that with no pain, even for p=1 (so the operator is nuclear, which implies that the spectrum is even in $$l^1$$). --pm a  19:36, 16 January 2010 (UTC)
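The identity $$\sum_{i,j}|K(i,j)|^2=\sum_n d(n)f(n)^2$$ is just a regrouping: the pairs (i,j) with ij = n contribute d(n) copies of f(n)^2. A quick exact check in Python (the cutoff N and the test function are arbitrary choices of mine; supporting f on {1, ..., N} makes the double sum finite, since ij = n ≤ N forces i, j ≤ N):

```python
from fractions import Fraction

N = 60                                               # arbitrary cutoff
f = {n: Fraction(1, n) for n in range(1, N + 1)}     # arbitrary test function on {1..N}

def d(n):
    """Divisor-counting function: the number of pairs (i, j) with i * j = n."""
    return sum(1 for i in range(1, n + 1) if n % i == 0)

# left side: sum over all pairs (i, j) of |K(i,j)|^2 with K(i,j) = f(ij)
lhs = sum(f[i * j] ** 2
          for i in range(1, N + 1)
          for j in range(1, N + 1)
          if i * j <= N)
# right side: the same terms grouped by the product n = ij
rhs = sum(d(n) * f[n] ** 2 for n in range(1, N + 1))
assert lhs == rhs
```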


 * Yes, I was hoping for something more specific ... e.g. something analogous to the Hankel matrix or Toeplitz matrix (which becomes a Toeplitz operator), or perhaps some relation to some variant of a companion matrix or Vandermonde matrix or something along those lines. (Roughly speaking, the GKW is the shift operator on the space of continued fractions, so it should have all sorts of interesting multiplicative structure.) linas (talk) 20:39, 16 January 2010 (UTC)


 * I see the analogy... actually, the kernel $$f(i-j)$$ corresponds to convolution with $$f$$, and there is a very nice algebraic structure around it, which gives a lot of leverage. The kernel f(ij) may still enjoy some nice algebraic feature, but I can't see it. What I see is more on the side of bounds, e.g., that if $$f$$ is, as you say, $$\scriptstyle O\left(\exp(-c\sqrt{n})\right)$$, the operator is nuclear, even if you adopt suitable weighted $$l_2$$ spaces, and this gives you asymptotic estimates for the eigenvalues (I guess they go to 0 like $$f$$ or so). :-/ pm a  22:03, 16 January 2010 (UTC)
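To illustrate the contrast drawn above: a kernel $$K(i,j)=f(i-j)$$ is a Toeplitz matrix whose action is exactly discrete convolution with f, which is the algebraic structure that $$M_{ij}=f(ij)$$ lacks. A minimal Python sketch (the taps and input vector are arbitrary choices of mine):

```python
def conv_matrix_apply(f, v):
    """Apply the Toeplitz kernel K(i, j) = f(i - j): (Kv)[i] = sum_j f(i - j) v[j]."""
    n = len(v)
    return [sum(f(i - j) * v[j] for j in range(n)) for i in range(n)]

# a causal example: f supported on {0, 1, 2}
taps = {0: 1.0, 1: 0.5, 2: 0.25}
f = lambda k: taps.get(k, 0.0)
v = [1.0, 2.0, 3.0, 4.0]

# applying the matrix is the same as convolving the taps with v
out = conv_matrix_apply(f, v)
assert out == [1.0, 2.5, 4.25, 6.0]
```

(For $$M_{ij}=f(ij)$$ no such convolution identity holds; the natural grouping is multiplicative, over divisors, as in the Hilbert-Schmidt computation above.)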