Wikipedia:Reference desk/Archives/Mathematics/2013 January 19

= January 19 =

Riemann hypothesis for curves over finite fields
Okay, so I've been reading this document and I understand a lot of it, but not all:

http://www.math.ucdavis.edu/~osserman/math/riemann-elliptic.pdf

Firstly, is $$a_q$$ the same as the value of $$a$$ in the Weierstrass equation, or is it completely unrelated to it? Secondly, why does it equal $$q - N_q$$? And also, where does the $$2\sqrt{q}$$ come from? I assume it's something to do with $$q$$ being the expected value, $$a_q$$ being the variance, and $$N_q$$?


 * I think there is no connection between $$a_q$$ and $$a$$ in that paper (except, of course, that if one knows $$a$$, then one can in principle compute $$a_q$$). The $$2\sqrt{q}$$ I think is not motivated by any kind of probabilistic considerations, but rather because it is this estimate which is equivalent to the local Riemann hypothesis.   Sławomir Biały  (talk) 02:48, 19 January 2013 (UTC)


 * The idea is that you can count the number of points your curve has over the finite field $$ \mathbb{F}_q $$ by looking at the trace of the Frobenius endomorphism acting on a relevant vector space (how complicated a vector space depends on the curve; for an elliptic curve, you can get away with looking at the Tate module), by a version of the Lefschetz fixed point theorem in this setting.
 * Forgetting for a second how you set it all up, you want to compute $$ a_q = \mathrm{tr}(F | V) $$, the trace of the Frobenius endomorphism F acting on some vector space V over a certain field of characteristic 0. But you might know that the trace of a linear map is just the sum of its eigenvalues.
 * Here's then how everything fits in: the polynomial $$P(q^{-s})$$ that appears in the formula $$ Z(s) = \frac{P(q^{-s})}{(1-q^{-s})(1-q\,q^{-s})}$$ is the characteristic polynomial of F acting on V, $$ P(T) = \det(1 - F T) $$, and thus its roots are (up to taking inverses because of the different convention) the eigenvalues of F.
 * Now, the Riemann hypothesis for curves over finite fields says precisely that the roots of this polynomial are of the form $$q^{-s}$$ for $$s$$ a complex number of real part 1/2.
 * This then means that the trace of F is the sum of 2 g numbers (where g is the genus of your curve, which is 1 in the case of an elliptic curve), each of absolute value $$ q^{1/2} $$. In particular, its own absolute value is less than or equal to $$ 2 g q^{1/2} = 2 g \sqrt{q} $$.
 * The final formula for the number of points of your curve over $$ \mathbb{F}_q $$ is then, according to the Lefschetz fixed point formula, $$ 1 - \mathrm{tr}(F | V) + q = 1 - a_q + q $$, and so this is within $$ 2 g \sqrt{q} $$ of $$ 1 + q $$. (The difference of +1 with the quoted article is the distinction between affine and projective, i.e. here I am counting an extra point at infinity which is the identity for the group law on the elliptic curve in the genus 1 case with the Weierstrass equations.)
 * You might also want to take a look at my answer to your previous question for clarifications.
 * Hope that helps. -SamTalk 08:53, 19 January 2013 (UTC)
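The point-count estimate described above is easy to check by brute force for small primes. Here is a quick Python sketch (the curve $$y^2 = x^3 - x$$ is just an example choice, and the count is projective, i.e. it includes the point at infinity):

```python
import math

def count_points(p, a, b):
    """Count projective points on y^2 = x^3 + a*x + b over F_p (includes infinity)."""
    # tabulate how many square roots each residue has mod p
    sqrt_count = [0] * p
    for y in range(p):
        sqrt_count[(y * y) % p] += 1
    n = 1  # the point at infinity
    for x in range(p):
        n += sqrt_count[(x * x * x + a * x + b) % p]
    return n

for p in [5, 13, 101, 1009]:
    N = count_points(p, -1, 0)           # the curve y^2 = x^3 - x
    a_q = p + 1 - N                      # trace of Frobenius (projective count)
    assert abs(a_q) <= 2 * math.sqrt(p)  # the Hasse bound |a_q| <= 2*sqrt(p)
    print(p, N, a_q)
```

For every prime tested, the deviation of the point count from $$q + 1$$ stays within $$2\sqrt{q}$$, as the local Riemann hypothesis guarantees for genus 1.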

I see. I think. So in the paper, the f(u) function with the roots $$u_1$$ and $$u_2$$ is derived from the Frobenius endomorphism?
 * Yes, that's it. It's the characteristic polynomial of the Frobenius endomorphism, except that article doesn't tell you the full story, as it doesn't tell you what vector space it's acting on or anything, it just tells you the characteristic polynomial from thin air. In fact, in the article, $$ f(T) = \det( F - T I) $$ is the characteristic polynomial of the Frobenius (acting on some mysterious two dimensional vector space, or 2 g dimensional in the general genus g case), whereas the numerator of the zeta function, which I'll keep calling $$ P(T) $$, is the characteristic polynomial $$ P(T) = \det( I - T F) $$, so that's just a slightly different convention. In any case, this means $$ a_q $$ is the trace of Frobenius, and q is the determinant of Frobenius.
 * If you care about what this vector space is, which this paper is hiding under the blanket, I'll just say that you turn out to have more than one choice. The most convenient would be to have some nice explicit vector space over the rational numbers $$ \mathbb{Q} $$, but it turns out it isn't exactly so simple... in fact, what you get is a compatible system of vector spaces $$ V_\ell $$, one for each prime number $$ \ell $$. For $$ \ell \neq p $$ (where p is the characteristic of your original field $$ \mathbb{F}_q$$), this is the Tate module of your elliptic curve, $$ V_\ell = T_\ell \otimes_{\mathbb{Z}_\ell} {\mathbb{Q}_\ell} $$, where $$ T_\ell = \varprojlim_n E[\ell^n] $$ is the inverse limit of the $ \ell^n$-torsion subgroups of your elliptic curve E over the algebraic closure $$ \overline{\mathbb{F}_q} $$. In the general genus g case, you can take the Tate module of the Jacobian variety of your curve, but the way that naturally generalises to other varieties besides curves (surfaces, etc) is to take the $\ell$-adic étale cohomology of your variety. For $$ \ell = p $$, things are even more complicated, as showcased by the complexities of p-adic Hodge theory, and you have to substitute crystalline cohomology for étale cohomology.
 * I know that was a bit technical, but the thing to remember here is the case of elliptic curves, where basically you build up a vector space from the torsion subgroups of your elliptic curve: your elliptic curve has a group structure, and you look at points x on the curve such that $$ \ell^n x = \underbrace{x + x + \cdots + x}_{\ell^n \text{ times}} = 0$$ (these are the $$ \ell^n$$-torsion points). This is the subgroup $$ E[\ell^n] $$, and if you know about elliptic curves over the complex numbers just being complex tori you'll know that $$ E[\ell^n] $$ (over the complex numbers!) is isomorphic to $$\mathbb{Z}/\ell^n \mathbb{Z} \times \mathbb{Z}/\ell^n \mathbb{Z}$$. This is true if you look at all points over the complex numbers $$ \mathbb{C} $$, but is in fact also true over the algebraic closure $$ \overline{\mathbb{F}_q} $$, as long as $$ \ell \neq p $$. If $$ \ell = p$$, it's more complicated to do the right thing.
 * The picture to have in your head is that you build an elliptic curve like a topological torus, by gluing opposite edges of a rectangle... except to take into account the complex structure you have to do this while preserving angles, so it matters whether you use a rectangle or instead a tilted parallelogram... and you get this picture (sorry, I just made it in Inkscape): you're identifying opposite sides by a translation of the complex plane, and the group law is addition of points in the complex plane except you translate back within this fundamental parallelogram, imagining translated copies of this parallelogram tiling the plane. So the points depicted are exactly the points which, when you add them to themselves 4 times, give you the origin (any vertex of the parallelogram, they're all identified after gluing). You can see you have 16 points (the ones on each edge are identified with the ones on the opposite edge), and in fact a copy of the abelian group $$ \mathbb{Z}/4 \mathbb{Z} \times \mathbb{Z}/4 \mathbb{Z}$$: you are essentially labelling points by pairs $$ (0,0), (1,0), (0,1), \ldots, (3,3) $$ and counting modulo 4 in each factor; and the label $$ (a,b) $$ corresponds to the point with complex coordinates $$ a/4 + \tau b / 4$$, where $$ \tau $$ is the complex coordinate of the top-left vertex of the parallelogram.
 * So anyway, you put together all these torsion subgroups and bundle them up, by doing a kind of decimal expansion (in fact a p-adic expansion!), and you get $$ \mathbb{Z}_\ell \times \mathbb{Z}_\ell$$, in the same way as you can build the $$\ell$$-adic integers out of $$ \mathbb{Z} / \ell \mathbb{Z}, \mathbb{Z} / \ell^2 \mathbb{Z}, \ldots $$ (you are taking an inverse limit). Then all of these subgroups have an action of the Frobenius endomorphism on them, because remember these were only defined over the algebraic closure $$ \overline{\mathbb{F}_q} $$, and the fundamental structure of finite fields (and their Galois groups) means that this is permuted around by Frobenius (which is on coordinates just the map $$ x \mapsto x^q $$), and the stuff that isn't permuted around is precisely the finite field you started with $$ \mathbb{F}_q $$ (essentially because of Fermat's little theorem). So in the end, you obtain $$ \mathbb{Z}_\ell \times \mathbb{Z}_\ell$$ with an action of the Frobenius endomorphism F, and by allowing fractions you get the required vector space, which is $$ \mathbb{Q}_\ell \times \mathbb{Q}_\ell$$. I've played fast and loose by not distinguishing between the picture over the complex numbers (where you have this lattice, and tiling of the complex plane) and the picture over the finite field, but everything goes through as long as $$ \ell \neq p $$.
 * That takes you through the construction of these vector spaces. It might take a while to digest, but if you think about it enough you see it kind of is the only choice you have! You bundle up these lattice points like that and look at how Frobenius acts on them.
 * Sorry if that was a bit too much! -SamTalk 08:36, 20 January 2013 (UTC)
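Putting the trace/determinant description above into numbers is a one-liner. A sketch (the example values are for $$y^2 = x^3 - x$$ over $$\mathbb{F}_{13}$$, which has 8 projective points by brute-force counting, so $$a_q = 13 + 1 - 8 = 6$$):

```python
import cmath
import math

def frobenius_eigenvalues(a_q, q):
    """Roots u1, u2 of the characteristic polynomial T^2 - a_q*T + q of Frobenius."""
    disc = cmath.sqrt(a_q * a_q - 4 * q)
    return (a_q + disc) / 2, (a_q - disc) / 2

q, a_q = 13, 6                 # y^2 = x^3 - x over F_13: N = 8, so a_q = 13 + 1 - 8
u1, u2 = frobenius_eigenvalues(a_q, q)
assert abs(abs(u1) - math.sqrt(q)) < 1e-9   # Riemann hypothesis: |u_i| = sqrt(q)
assert abs(abs(u2) - math.sqrt(q)) < 1e-9
assert abs(u1 + u2 - a_q) < 1e-9            # a_q is the trace of Frobenius
assert abs(u1 * u2 - q) < 1e-9              # q is the determinant of Frobenius
# the same eigenvalues predict counts over extensions: N_{q^n} = q^n + 1 - (u1^n + u2^n)
print((q**2 + 1 - (u1**2 + u2**2)).real)    # -> 160.0, predicted points over F_{13^2}
```

The eigenvalues here come out to $$3 \pm 2i$$, with $$|3 \pm 2i| = \sqrt{13}$$ exactly, which is the local Riemann hypothesis in this case.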

prime ideal extension and contraction
Let f : A -> B be a ring homomorphism. Suppose I have a prime ideal p of A, and I extend it to get an ideal p^e of B. If p^e is also prime, does it follow that p^ec = p? --helohe (talk)  04:00, 19 January 2013 (UTC)


 * I don't think so, because the identity would imply that every such prime ideal of A comes from a prime ideal of B, which is not necessarily the case. More precisely,
 * $$\mathfrak{p}$$ is in the image of $$\operatorname{Spec}B \to \operatorname{Spec}A,\ \mathfrak{q} \mapsto \mathfrak{q}^c$$ if and only if $$\mathfrak{p}^{ec} = \mathfrak{p}$$.
 * (note to myself, cite this somewhere in Wikipedia.)
 * -- Taku (talk) 17:25, 19 January 2013 (UTC)
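 * A concrete counterexample along these lines (a sketch; any similar projection works):

```latex
% extension prime, but contraction strictly larger than the original prime
f \colon k[x,y] \to k[x], \quad f(y) = 0, \qquad
\mathfrak{p} = (x - y) \ \text{prime in } k[x,y];
\quad
\mathfrak{p}^e = \bigl(f(x - y)\bigr) = (x) \ \text{prime in } k[x],
\quad
\mathfrak{p}^{ec} = f^{-1}\bigl((x)\bigr) = (x, y) \supsetneq (x - y) = \mathfrak{p}.
```

Consistently with the criterion above, $$(x - y)$$ does not contain $$\ker f = (y)$$, so it cannot be the contraction of any prime of $$k[x]$$.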


 * Thank you. There is something else in Atiyah–MacDonald that confused me: the words "onto" and "into" don't seem to be used consistently. Many times "onto" means surjective, but, for example, in the definition of a ring homomorphism, "into" is used even though injectivity is not meant. --helohe (talk)  22:13, 20 January 2013 (UTC)
 * I don't think "into" necessarily implies injective; on the other hand, "onto" usually implies "surjective". Usually, the context makes it clear what is meant. -- Taku (talk) 13:51, 21 January 2013 (UTC)

Running time of APR-CL
In a book, I found that APR-CL "has running time n raised to the power log log n." Does it mean n^(log(log n))?

Later, this book says log log 10^100 is about 230, but actually it is log 10^100 that is around 230, not log log 10^100.

Do you think the writer or editor is confused?

I am not a mathematical expert but a translator. Please help in plain English. --Analphil (talk) 08:15, 19 January 2013 (UTC)


 * Yes. Yes. Double sharp (talk) 10:02, 19 January 2013 (UTC)
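For what it's worth, the numbers are easy to check directly (natural logarithms here; with base-10 logs you would get 100 and about 2 instead, still nowhere near 230):

```python
import math

n = 10**100
print(math.log(n))             # ~230.26: this is log(10^100), natural log
print(math.log(math.log(n)))   # ~5.44:  this is log log(10^100)
```

So 230 is the value of log 10^100, as suspected, and log log 10^100 is only about 5.4.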

Estimating a categorical distribution
Consider the set $$S$$ of K-dimensional vectors whose entries are all elements of the set $$\{1,2,3,\ldots,N\}$$. Show that $$\sum_{\vec{\alpha} \in S} \prod_{k=1}^N \left(\frac{\langle k,\vec{\alpha}\rangle}{K}\right)^{\langle k,\vec{\alpha}\rangle} = O\left(K^{\frac{N-1}{2}}\right)$$, where $$\langle k,\vec{\alpha}\rangle$$ is the number of times $$k \in \{1,2,3,\ldots,N\}$$ appears in $$\vec{\alpha}$$. --AnalysisAlgebra (talk) 19:20, 19 January 2013 (UTC)
 * just to check, this together with your follow-on question below isn't your homework, is it?  nonsense  ferret  12:52, 20 January 2013 (UTC)
 * No, they are not. --AnalysisAlgebra (talk) 13:58, 20 January 2013 (UTC)
 * Note, please: left and right angle brackets are distinct from the less-than and greater-than operators. They are denoted by the LaTeX commands \langle and \rangle, respectively, and they render as $$\langle k,\vec{\alpha}\rangle$$ rather than $$< k,\vec{\alpha}>$$ --CiaPan (talk) 10:59, 21 January 2013 (UTC)
 * Although personally I think the acute angle looks better. The problem with the less-than and greater-than signs is not so much the glyph itself as the spacing.  Possibly there's a happy medium somewhere in between, but I really don't like $$\langle$$ and $$\rangle$$ much. --Trovatore (talk) 11:04, 21 January 2013 (UTC)
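The sum in question can be evaluated by brute force for small N and K. A sketch (with N = 2 the ratio to $$K^{1/2}$$ appears to decrease toward roughly 1.25, consistent with the claimed $$O(K^{(N-1)/2})$$ bound):

```python
import itertools
from collections import Counter

def categorical_sum(N, K):
    """Sum over all vectors alpha in {1..N}^K of prod_k (c_k/K)^(c_k),
    where c_k = <k, alpha> is the number of times k appears in alpha."""
    total = 0.0
    for alpha in itertools.product(range(1, N + 1), repeat=K):
        counts = Counter(alpha)
        term = 1.0
        for k in range(1, N + 1):
            c = counts.get(k, 0)
            if c:  # a zero count contributes (0/K)^0 = 1, i.e. no factor
                term *= (c / K) ** c
        total += term
    return total

# check that the ratio to K^{(N-1)/2} stays bounded as K grows (N = 2 here)
for K in [2, 4, 8, 12]:
    print(K, categorical_sum(2, K) / K ** 0.5)
```

This is only a finite check, of course, but the printed ratios shrink monotonically, which is what the bound predicts.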

Geometric figure Extension.
(I'm not really sure what to call this, if anyone has a better title, feel free to alter).

Let S be a closed subset of the plane. Define S' as follows: S' is the set of all points c such that there are a, b in S with distance(a, b) = distance(b, c) and a, b, c on a straight line. I'm looking at this mostly for filled-in polygons. If S is an *even*-sided regular polygon, then S' is exactly 9 times the area of S. In fact, with a four-sided polygon, the ratio is 9 as far as I can tell even for rectangles or parallelograms (it isn't affected by stretching or skewing). It is also a ratio of 9 for a circle. However, for S one of the *odd* (2n+1)-sided regular polygons with side d, S' turns out to be the polygon with twice the number of sides (4n+2), alternating between sides d and 2d (with all angles the same). If S is a regular (equilateral) triangle, then S' has an area 13 times that of S (this can be diagrammed on a triangular grid). As the number of sides of the odd polygon gets bigger, it gets close to the circle, and as such the ratio gets closer to 9.

Note, if T is just the border of S, then T' = S' without S (so if S is a square of side length d, then T' is a 3d-by-3d square missing the center square).

So I'm looking for three things.


 * The first is a formula for the ratio of the area of the polygon of 4n+2 sides, alternating between lengths d and 2d, to that of the regular polygon of 2n+1 sides of length d (this would have value 13 for n=1 and go to 9 as n increases).


 * The second is, generally, whether things get odder for shapes that aren't polygons.

Naraht (talk) 21:18, 19 January 2013 (UTC)
 * The third is what branch of mathematics this falls under: some branch of geometry, topology, or something else?


 * Yes, things get more complicated for less regular figures.
 * Take an L-figure, a concave shape obtained by removing a 1/4 part of a 1-by-1 square from its corner, say the top-right corner. The result of the extension you described is a 3-by-3 square with two smaller squares subtracted: 1-by-1 from the top-right corner and half-by-half from the bottom-left. The resulting figure's area is 10 1/3 times that of the initial figure.
 * For a disk (i.e. the area enclosed by a circle) the result is a disk with radius enlarged 3 times, so the area grows 9 times.
 * However, for a circle with radius R (i.e. the closed curve itself) the resulting figure is an annulus with radii R and 3R. The area ratio here is infinite, as the starting figure has zero area.
 * CiaPan (talk) 08:45, 21 January 2013 (UTC)
 * I also found a 10 1/3 ratio for removing a triangle from a domino (so it is made of three 45-45-90 triangles). I wonder what the shape is for two squares touching at a corner...


 * Yes, in that regard, a disk is sort of an infinite polygon.


 * The annulus was what I meant with the "T is just the border of S..." remark above. Naraht (talk) 13:12, 21 January 2013 (UTC)


 * 'I wonder what the shape is for two squares touching at a corner...' Let them be 1-by-1 squares, placed in Cartesian coordinates so that they touch at (0, 0) and lie in the 1st and 3rd quadrants of the coordinate system. Then the extension of the figure is the union of all unit squares whose vertices all have integer coordinates satisfying |x| ≤ 3, |y| ≤ 3, and |x – y| ≤ 3. The extended figure consists of 24 such unit squares. --CiaPan (talk) 13:37, 21 January 2013 (UTC)
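The area ratios quoted in this thread can be sanity-checked numerically by discretising the figures on a lattice. A rough sketch (point-count ratios only approximate area ratios, so the numbers come out slightly low at finite resolution):

```python
def extend(S):
    """S' = { 2b - a : a, b in S }: c collinear with a, b and |ab| = |bc|."""
    return {(2 * bx - ax, 2 * by - ay) for (ax, ay) in S for (bx, by) in S}

n = 40  # grid resolution; ratios of point counts approximate ratios of areas

square = {(x, y) for x in range(n + 1) for y in range(n + 1)}
triangle = {(x, y) for x in range(n + 1) for y in range(n + 1) if x + y <= n}

sq_ratio = len(extend(square)) / len(square)
tri_ratio = len(extend(triangle)) / len(triangle)
# the ratios tend to 9 and 13 as n -> infinity (finite-n counts run a bit low)
print(sq_ratio, tri_ratio)
```

A right triangle stands in for the equilateral one here: the construction c = 2b - a commutes with affine maps, and affine maps preserve area ratios, so the 13 is unchanged.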

Independence number of graph: bound similar to Lovasz number?
Hello,

I was reading on the independence number of a graph, and I noticed that in some works, the following upper bound is obtained using linear algebra:

If A is any real symmetric matrix indexed by the vertices of the graph, and c is a positive scalar, such that $$A_{i j}=0$$ whenever $$v_i$$ is not adjacent to $$v_j$$, and $$A+I - J/c$$ is positive semidefinite (where $$J$$ is the all-ones matrix), then c is an upper bound on the independence number of the graph.

The proof is very short, and relies on considering $$\chi^T (A+ I-J/c)\chi$$ for an appropriate (0,1)-vector $$\chi$$, the indicator of an independent set.

Is there a name for this bound? It reminds me of the Lovász number, where one reads:

Let B range over all n × n symmetric positive semidefinite matrices such that b<sub>ij</sub> = 0 for every ij ∈ E and Tr(B) = 1. Here Tr denotes the trace (the sum of the diagonal entries) and J is the n × n matrix of ones. Then

$$\vartheta(G) = \max_B \operatorname{Tr}(BJ).$$ Note that Tr(BJ) is just the sum of all entries of B.

However, I do not know for sure if these bounds are equally strong. Where can I get more information on this? Many thanks, Evilbu (talk) 22:09, 19 January 2013 (UTC)
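As a numerical illustration of the first bound, here is a sanity check on the 5-cycle $$C_5$$ (a sketch: A is taken to be a Hoffman-style scaling of the adjacency matrix, and the circulant eigenvalues are known in closed form, so plain Python suffices; for $$C_5$$ the resulting c happens to equal the Lovász number $$\sqrt{5}$$, while $$\alpha(C_5) = 2$$):

```python
import math

# C5: the 5-cycle, with alpha(C5) = 2 and Lovasz number sqrt(5)
n, d = 5, 2
adj_eigs = [2 * math.cos(2 * math.pi * k / n) for k in range(n)]  # circulant eigenvalues
lam_min = min(adj_eigs)          # = -(golden ratio), about -1.618
s = -1 / lam_min                 # take A = s * (adjacency matrix): A_ij = 0 off the edges
c = n / (1 + d * s)              # smallest c keeping the all-ones direction nonnegative

# eigenvalues of M = A + I - J/c: the all-ones vector (k = 0) is shared by Adj and J
m_eigs = [s * adj_eigs[0] + 1 - n / c] + [s * e + 1 for e in adj_eigs[1:]]
assert all(e >= -1e-9 for e in m_eigs)   # M is positive semidefinite
assert abs(c - math.sqrt(5)) < 1e-9      # the bound equals the Lovasz number of C5
print(c)                                  # c >= alpha(C5) = 2, as the bound promises
```

Whether the two bounds coincide in general I can't say from this example alone, but on $$C_5$$ the PSD construction above reproduces $$\vartheta(C_5) = \sqrt{5}$$ exactly.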