Talk:Elementary symmetric polynomial

Case of no variables
So e0 = 1 when the number of variables is at least 1. What if the number of variables is 0? Is e0 equal to 0 in that case? Michael Hardy (talk) 23:26, 7 September 2009 (UTC)


 * Well, if you define the elementary symmetric polynomials by

$$e_k(X_1,\ldots,X_n)=\sum_{P\subseteq \{1,\ldots,n\}:\,|P| = k}\left(\prod_{i\in P}X_i\right),$$
 * then when n and k are 0, we have a sum over the subsets of the empty set, and the only term in the sum is a product over the empty set, which is by convention 1.

Forgetfulfunctor (talk) 00:35, 8 September 2009 (UTC)
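The empty-product convention above is easy to check directly. A minimal Python sketch of the defining sum (the helper name `e` is just for illustration, not anything from the article):

```python
from itertools import combinations
from math import prod

def e(k, xs):
    """Sum, over all k-element subsets of xs, of the product of their entries."""
    # combinations(xs, 0) yields exactly one item, the empty tuple,
    # and prod(()) == 1, so e(0, xs) == 1 -- even when xs itself is empty.
    return sum(prod(c) for c in combinations(xs, k))

print(e(0, []))      # 1: the n = 0, k = 0 case discussed above
print(e(1, [2, 3]))  # 5
print(e(2, [2, 3]))  # 6
```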

Strict inequality
Why is $$e_2 (X_1, X_2, \dots,X_n) = \textstyle\sum_{1 \leq j < k \leq n} X_j X_k$$ instead of $$e_2 (X_1, X_2, \dots,X_n) = \textstyle\sum_{1 \leq j \leq k \leq n} X_j X_k$$? KlappCK (talk) 14:35, 13 May 2011 (UTC)


 * Because, by definition, elementary symmetric polynomials are sums of products of distinct variables; or equivalently, sums of square-free monomials. If you do want all monomials of a given degree as terms, you get the complete homogeneous symmetric polynomials instead, which are quite interesting as well. Marc van Leeuwen (talk) 04:55, 14 May 2011 (UTC)
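A quick way to see the difference between the strict and the weak inequality is to list the monomials each sum produces. A small Python sketch (the variable names are just placeholders):

```python
from itertools import combinations, combinations_with_replacement

xs = ["x1", "x2", "x3"]

# strict j < k: distinct variables only -> square-free monomials (this is e2)
e2_terms = ["*".join(c) for c in combinations(xs, 2)]
# weak j <= k: repeats allowed -> all degree-2 monomials (this is h2)
h2_terms = ["*".join(c) for c in combinations_with_replacement(xs, 2)]

print(e2_terms)  # ['x1*x2', 'x1*x3', 'x2*x3']
print(h2_terms)  # the three terms above plus the squares x1*x1, x2*x2, x3*x3
```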


 * Marc van Leeuwen, thanks for the insight. I think it would be beneficial to (in some way) emphasize this fact in the header, as I apparently overlooked that distinction the first time through. KlappCK (talk) 17:03, 16 May 2011 (UTC)

Rephrasing needed in A Self-Contained Algorithmic Proof
Hi,

the following sentence is very difficult to understand:

"Clearly Q is a polynomial in the symmetric polynomials. Moreover, $$x_n\,\!$$ occurs with exponent $$i_n$$, since it occurs with exponent $$i_n-i_{n-1}\,\!$$ in the first term, $$i_{n-1}-i_{n-2}\,\!$$ in the second term, and so on, down to $$i_1\,\!$$ times in the final term. Moreover, $$x_n^{i_n}\,\!$$ only occurs in a single monomial in Q, and in that monomial, $$x_{n-1}\,\!$$ occurs with exponent $$i_{n-1}\,\!$$, since it does not include a term from $$s_1\,\!$$, and it occurs in the rest of Q with exponent $$i_{n-1}-i_{n-2}\,\!$$ in the second term, $$i_{n-2}-i_{n-3}\,\!$$ in the third term, and so on, down to $$i_1\,\!$$ times in the final term. And so on for the remaining variables. This shows that $$P-Q\,\!$$ has monomials that are smaller than the one just eliminated. We can now continue the process until nothing remains in P."

I believe that "it" here refers to $$x_{n-1}\,\!$$, but even so it is difficult to understand what's meant. What are the terms here? Wisapi (talk) 16:22, 26 May 2011 (UTC)


 * I think I have made this proof more readable now. -- Darij (talk) 01:27, 29 October 2011 (UTC)
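For readers following the proof, one elimination step can be sanity-checked numerically. For $$P = x_1^2 + x_2^2 + x_3^2$$ the leading monomial has exponents (2, 0, 0), so the algorithm subtracts $$e_1^2$$, leaving $$-2e_2$$, whose leading monomial is strictly smaller; one more step terminates with $$P = e_1^2 - 2e_2$$. A minimal Python check (an illustration of the identity, not the article's proof):

```python
from itertools import combinations
from math import prod

def e(k, xs):
    # elementary symmetric polynomial e_k evaluated at the numbers xs
    return sum(prod(c) for c in combinations(xs, k))

xs = [2, 3, 5]
p2 = sum(x**2 for x in xs)  # the symmetric polynomial x1^2 + x2^2 + x3^2

# Leading monomial x1^2 has exponents (2, 0, 0), so Q = e1^(2-0) * e2^0 * e3^0;
# P - Q = -2*e2 has a strictly smaller leading monomial, and the next
# step eliminates it, giving P = e1^2 - 2*e2.
print(p2 == e(1, xs)**2 - 2*e(2, xs))  # True
```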

Question on First Proof
I'm feeling stupid for asking this since I supposedly proofread all of this page a year or so ago. But why do we know in the First Proof that "$$R(X_1, \ldots, X_{n})$$ is a symmetric polynomial in X1, ..., Xn, of the same degree as $$ P_{\mbox{lacunary}}$$"? I mean the "same degree" part. Theoretically, couldn't it happen that $$ \tilde{Q}$$ is a polynomial of huge degree, but all of its high-degree terms cancel out when it is applied to $$(\sigma_{1,n-1}, \ldots, \sigma_{n-1,n-1})$$, whereas applying it to $$(\sigma_{1,n}, \ldots, \sigma_{n-1,n})$$ does not cancel them out? This is of course made impossible by the algebraic independence of the elementary symmetric polynomials, but that's not supposed to be known in this proof. -- Darij (talk) 01:30, 28 March 2013 (UTC)