Talk:Bilinear form

quadratic form associated to the bilinear form
there should be a statement, either here or at quadratic form, about when a bilinear form coincides with the bilinear form associated to the quadratic form associated to that bilinear form, i.e.


 * $$B \;\Rightarrow\; Q(x) = B(x,x) \;\Rightarrow\; B'(x,y) = \tfrac{1}{2}\big( Q(x+y) - Q(x) - Q(y) \big) \;\overset{?}{=}\; B(x,y)$$

and/or the analogue in the complex case with the longer 1/4 (Q(x+iy) +/- Q(x-iy) ...) formula. — MFH: Talk 22:11, 21 June 2005 (UTC)
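A quick numerical sketch of the question above (an editor's illustration, not part of the original comment), using NumPy and an arbitrarily chosen non-symmetric matrix to represent B: the polarization formula recovers the symmetric part (A + Aᵀ)/2 of the matrix, so B' = B exactly when B is symmetric.

```python
import numpy as np

A = np.array([[1., 2., 0.],
              [0., 1., 3.],
              [0., 0., 1.]])           # non-symmetric matrix representing B
B = lambda u, v: u @ A @ v             # B(u, v) = u^T A v
Q = lambda u: B(u, u)                  # associated quadratic form

x = np.array([1., 0., 0.])
y = np.array([0., 1., 0.])

# B'(x, y) = 1/2 (Q(x+y) - Q(x) - Q(y)) recovers the symmetric part of A
Bp = 0.5 * (Q(x + y) - Q(x) - Q(y))
assert np.isclose(Bp, x @ ((A + A.T) / 2) @ y)   # Bp == 1.0 here
assert not np.isclose(Bp, B(x, y))               # B(x, y) == 2.0, so B' != B

# for a symmetric matrix S, the recovered form coincides with the original
S = A + A.T
assert np.isclose(0.5 * ((x+y) @ S @ (x+y) - x @ S @ x - y @ S @ y), x @ S @ y)
```

So over a field of characteristic not 2, the answer to the `?=?` is: yes if and only if B is symmetric.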

reflexivity and skew symmetric versus alternating
Following the 'be bold' policy, I just went ahead and changed the article. While I think everything was correct, I found it confusing. First of all, there was no mention of reflexivity; also, the section was called "symmetry" although it did not discuss only symmetric forms. When I changed the title I wanted to emphasize the reason for discussing these two kinds: alternating and symmetric. However, I can imagine some people disagree with me now. Evilbu 19:16, 10 February 2006 (UTC)

removed symmetric bilinear form redirect
I understand why there was a redirect, yet I still removed it. Certain things, like orthogonal polarities and orthogonal bases (Sylvester's law of inertia), had to be explained thoroughly in a new article, I think. However, I agree that I am explaining matrix representations there (which apply to all bilinear forms) and the definition of $$W^{\perp}$$, which applies to all reflexive forms. All comments are welcome. — Preceding unsigned comment added by Evilbu (talk • contribs) 13:49, 14 February 2006 (UTC)

removed comment about A being symmetric
There was a comment in the first subsection stating that A is symmetric due to the symmetry of the bilinear form. However, this is confusing, as one can certainly have a non-symmetric bilinear form (hence a non-symmetric matrix A). AlyoshaK 13:03, 22 September 2006 (UTC)
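A minimal numerical example of the point above (an editor's illustration; the matrix is arbitrarily chosen): a non-symmetric matrix A gives a perfectly good bilinear form with B(x, y) ≠ B(y, x).

```python
import numpy as np

A = np.array([[0., 1.],
              [0., 0.]])               # deliberately non-symmetric matrix
B = lambda x, y: x @ A @ y             # B(x, y) = x^T A y

x = np.array([1., 0.])
y = np.array([0., 1.])

print(B(x, y), B(y, x))                # prints: 1.0 0.0
assert B(x, y) != B(y, x)              # the form is bilinear but not symmetric
```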

Why "Form"
What is the history of the use of the word "form" for this topic? It seems somewhat arbitrary. —Ben FrantzDale 14:42, 26 October 2006 (UTC)

Why finite dimensional
Are bilinear forms defined only on finite-dimensional vector spaces? I don't think so! So this should be mentioned somewhere! —Preceding unsigned comment added by 132.230.30.96 (talk) 13:06, 7 January 2010 (UTC)
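Indeed, the standard example is $$B(f,g) = \int_0^1 f(x)g(x)\,dx$$ on the (infinite-dimensional) space of polynomials. An editor's SymPy sketch of this, checking bilinearity on a few sample polynomials:

```python
import sympy as sp

x = sp.symbols('x')

def B(f, g):
    """Bilinear form B(f, g) = integral of f(x) g(x) over [0, 1] on polynomials."""
    return sp.integrate(f * g, (x, 0, 1))

f, g, h = x, x**2, 1 + x
a, b = sp.Rational(2), sp.Rational(3)

# linearity in the first argument; symmetry gives linearity in the second
assert B(a*f + b*g, h) == a*B(f, h) + b*B(g, h)
assert B(f, g) == sp.Rational(1, 4)    # integral of x^3 over [0, 1]
```

No basis or matrix is needed for the definition, which supports mentioning the infinite-dimensional case in the article.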

No need for the matrix to be square
if x is in R^n and y is in R^m:

$$(x^T)_{1\times n}\, B_{n\times m}\, y_{m\times 1},$$ where the subscripts $$p\times q$$ give the dimensions of the arrays above.

Why are we restricting the arguments of B(*,*) to be in the same vector space, i.e. R^d (and R^d only)? — Preceding unsigned comment added by 99.149.190.128 (talk) 23:39, 26 April 2012 (UTC)


 * That would be a bilinear map. A bilinear form is a bilinear map to a field for which the arguments belong to the same space and therefore have the same dimensionality. Schomerus (talk) 04:30, 14 June 2012 (UTC)


 * It is a special case of a bilinear map in which the codomain of the mapping is the underlying field. This raises the question of whether there is a name for this special case, which is what all matrices would belong to.  — Quondum☏ 07:50, 14 June 2012 (UTC)

Reviewing this question, it seems to me that bilinear forms on two spaces of different dimension are not adequately addressed under §Different spaces. They are presumably often ignored since several of the concepts (isomorphism, symmetric/alternating/skew-symmetric, associated quadratic form) don't apply, yet many others (orthogonality, maps to the other space's dual, radicals) do apply. This strikes me as a missing area in the article; surely this must be covered in the literature? —Quondum 14:28, 13 April 2014 (UTC)
 * Deltahedron (talk) 18:10, 13 April 2014 (UTC)
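The rectangular case discussed in this thread can be sketched numerically (an editor's illustration with an arbitrary matrix): any n×m matrix gives a bilinear map on R^n × R^m, but swapping the arguments is not even well formed when n ≠ m.

```python
import numpy as np

rng = np.random.default_rng(1)
n, m = 3, 2
M = rng.standard_normal((n, m))        # rectangular: arguments live in different spaces
B = lambda x, y: x @ M @ y             # (1 x n) @ (n x m) @ (m x 1) -> scalar

x, x2 = rng.standard_normal(n), rng.standard_normal(n)
y = rng.standard_normal(m)
a = 2.5

# linear in each slot separately
assert np.isclose(B(a*x + x2, y), a*B(x, y) + B(x2, y))

# but symmetry is not even a meaningful question: B(y, x) is a shape error
try:
    B(y, x)
    raise AssertionError("expected a shape mismatch")
except ValueError:
    pass                               # shapes (m,) and (n, m) do not align
```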

Decomposition and direct sum (product?) of vector spaces with bilinear forms
It strikes me that what should be an important section of this article is missing, which could be called Orthogonal direct sum and decomposition. Given two vector spaces U, V over the same field F, with bilinear forms $$B_U$$ and $$B_V$$ respectively, one defines their product as the direct sum of vector spaces W = U ⊕ V equipped with the bilinear form $$B_W$$ satisfying $$B_W(u_1 + v_1, u_2 + v_2) = B_U(u_1, u_2) + B_V(v_1, v_2)$$ for all elements in the respective domains. In particular, one should be able to decompose every (reflexive?) vector space with a bilinear form (with the possible exception of fields of characteristic 2) into an orthogonal direct sum of 1-d degenerate, 1-d nondegenerate and 2-d symplectic factor spaces. Artin and Kaplansky seem to cover this, though I might be a little confused about it. Are there people familiar with this? —Quondum 02:13, 31 August 2014 (UTC)
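In matrix terms, the orthogonal direct sum proposed above is just a block-diagonal matrix; an editor's NumPy sketch (with arbitrarily chosen component forms) verifying the defining property and the orthogonality of U and V inside W:

```python
import numpy as np

rng = np.random.default_rng(2)
BU = rng.standard_normal((2, 2))       # matrix of a form on U = R^2
BV = rng.standard_normal((3, 3))       # matrix of a form on V = R^3

# the orthogonal direct sum W = U (+) V has block-diagonal matrix diag(BU, BV)
BW = np.block([[BU, np.zeros((2, 3))],
               [np.zeros((3, 2)), BV]])

u1, u2 = rng.standard_normal(2), rng.standard_normal(2)
v1, v2 = rng.standard_normal(3), rng.standard_normal(3)
w1, w2 = np.concatenate([u1, v1]), np.concatenate([u2, v2])

# B_W(u1 + v1, u2 + v2) = B_U(u1, u2) + B_V(v1, v2)
assert np.isclose(w1 @ BW @ w2, u1 @ BU @ u2 + v1 @ BV @ v2)

# U and V are orthogonal in W: B_W(u, v) = 0 for u in U, v in V
u_in_W = np.concatenate([u1, np.zeros(3)])
v_in_W = np.concatenate([np.zeros(2), v2])
assert np.isclose(u_in_W @ BW @ v_in_W, 0.0)
```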

Generalization to modules over non-commutative rings
The concept of a bilinear form (and indeed of a bilinear map) generalizes naturally to modules over non-commutative rings. Why the restriction here? All that is needed is to keep track of the left- and right-module structure, as one would expect with such a generalization. —Quondum 23:47, 25 April 2015 (UTC)


 * Bourbaki (Algebra I) states rather tantalizingly on p. 233 that "(the notion of bilinear form will be defined generally in IX, § 1)". Does someone have this, to add this definition to the article?  —Quondum 05:22, 6 September 2015 (UTC)