User talk:Matrixbud/GA Mathematica package discussion

This page is intended as a working area for a discussion on a specific subject, perhaps freer of the usual talk page guidelines. The user in whose space this is has general discretion on what goes.

Package location
The package is the Geometric-Algebra package on GitHub.

Terminology
In GA, terminology has been invented somewhat sloppily. Issues include the re-use of terms with meanings different from, albeit similar to, their meanings in mathematics generally. It would be worth deciding which direction to favour (Hestenes's GA terminology, or more established mathematical terminology). I (Q) favour the latter, since the former implies an approach of "going it alone": it makes it more difficult to reference the broader mathematical literature, including a large body of more formal work on Clifford algebras. Following the usage of any specific author in geometric algebra is bound to conflict with someone else's. Specific terms are listed here with their associated issues.


 * Grassmann algebra – Meaning varies by author. An unambiguous term is exterior algebra.
 * wedge product – The mathematical term is exterior product.
 * dot product – This is so variously defined that it is best avoided, except perhaps between two vectors.
 * geometric product and Clifford product are synonyms; one should be chosen (the latter being more common in more "formal" Clifford algebra texts).
 * k-vector – This is a big one. In GA, this invariably means what you would call a slice of degree k.  It is very rare for this term to be used to mean "a vector in a space of dimension k".


 * I will now reference both terms (Grassmann–exterior, wedge–exterior, geometric–Clifford) in the source, examples, palette tooltips, and documentation. I personally prefer k-vector to mean a vector in k-space, but if you say that is the minority preference then I will try to avoid the term. I think it might be best if everyone avoided the term k-vector, since it has the two possible meanings. For this package, I will always say multivector or k-multivector, and that should be clear. I have a function that gives a standard vector in n-space. I currently call the function nVectorG but maybe I'll just call it VectorG. matrixbud (talk) 00:38, 20 September 2019 (UTC)


 * I eliminated all reference to Grassmann/exterior algebra. What I had was really just a placeholder for developing it, and I don't think that will happen anytime soon. I have now renamed many of my functions to improve clarity. I also mention both names, wedge product and exterior product, and geometric product and Clifford product, in several places so that no user should get confused. This includes the tooltips and the description shown if someone types, say, ?GeomPrdtG. I haven't yet uploaded because I have more changes underway. I also clarify the meaning of my dot product in several places. I use it sometimes, so I don't want to get rid of it. I usually prefer it to the contractions, so I suspect that is a matter of personal preference.


 * matrixbud (talk) 15:12, 11 October 2019 (UTC)

Notation
Version: Oct2019
 * Juxtaposition (space as the geometric product symbol) is dangerous at best, since Mathematica automatically re-orders factors in a way that is not consistent with the non-commutative geometric product.

Basis
The entire package appears (from the documentation) to be based on a basis. This is not very useful for more general symbolic manipulation: everything must be expressed in the form of basis vectors for anything to work (if I understand this correctly). In contrast, in a more general context with symbols, the basis vectors are simply symbols to which automatic simplification can be applied, e.g. where a product of adjacent basis vectors occurs in an expression, they can be arranged in canonical order by using the known bilinear form. The basis also does not need to be orthogonal or normalized in this treatment.
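The automatic simplification described here can be sketched in a few lines. The following Python sketch is illustrative only (invented names, not the package's code): it canonically reorders a product of basis vectors using the identity e_i e_j + e_j e_i = 2 B(e_i, e_j), for an arbitrary bilinear form B that need not be orthogonal or normalized.

```python
# Illustrative sketch (not the package's code): canonical reordering of
# basis-vector products using a user-supplied bilinear form B.
from collections import defaultdict

def normalize(word, coeff, B, out):
    """Reduce a product of basis vectors (a tuple of indices) to a sum of
    canonically ordered words, using e_i e_j = 2*B(i,j) - e_j e_i and
    e_i e_i = B(i,i)."""
    for k in range(len(word) - 1):
        i, j = word[k], word[k + 1]
        if i == j:
            # contract a repeated vector: e_i e_i = B(i,i)
            normalize(word[:k] + word[k + 2:], coeff * B(i, i), B, out)
            return
        if i > j:
            # swap an out-of-order pair: e_i e_j = 2 B(i,j) - e_j e_i
            normalize(word[:k] + word[k + 2:], coeff * 2 * B(i, j), B, out)
            normalize(word[:k] + (j, i) + word[k + 2:], -coeff, B, out)
            return
    out[word] += coeff   # already in canonical order

def simplify(terms, B):
    """terms: iterable of (word, coefficient) pairs."""
    out = defaultdict(float)
    for word, coeff in terms:
        normalize(tuple(word), coeff, B, out)
    return {w: c for w, c in out.items() if c != 0}

# Example with an orthonormal basis: e2 e1 e2 -> -e1 (since e2 e1 = -e1 e2)
B = lambda i, j: 1.0 if i == j else 0.0
print(simplify([((2, 1, 2), 1.0)], B))   # {(1,): -1.0}
```

With a non-orthogonal form, say B(1,2) = 0.5, the same code gives e2 e1 = 1 − e1 e2, illustrating that no orthogonality assumption is needed.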

User understanding
A proofreading is needed for the package documentation. There are some confusing aspects, for example:
 * Documentation:
 * There is no clear interpretation of expressions.
 * It is unclear how the 'type' of a symbol is inferred. How are a real number, a vector, a basis vector, or a general multivector distinguished?
 * Both juxtaposition and ⋄ are used, with a comment that indicates that they are not the same. The latter is clearly the geometric product symbol, and juxtaposition is clearly used for scalar multiplication, but juxtaposition of two basis vectors is confusing.
 * There seems to be a general presumption of understanding of the terminology without defining it precisely. This is exacerbated by it being nonstandard terminology in the GA community or anywhere else.
 * Installation:
 * I got as far as opening the palette (as per the installation manual, though I had to infer its name, which is not given correctly), but the instructions do not match what I found. I could not find the "Begin Package" section, and did not need to expand anything.  I ran the first cell and the palette appeared, but I could not find the menu item to install it.
 * Initialization cells could be used: they run automatically when the notebook is opened. The palette could possibly be a package (loaded using Needs or Get), rather than a notebook.
 * The palette has a signature type and a number of time dimensions, both of which are confusing. In particular, the static string "# of time dimensions: 0 (space), 1 (spacetime)" is unclear: it is hard to tell what is intended by it.

Comments on Denker
Denker has some internal errors and inconsistencies to be aware of.


 * Equation 4b is incorrect. It holds only where P and Q have grade=1.
 * Equation 6b is incorrect. It holds only where P and Q have grade=1.
 * The sequence in statement 16 is incorrect: "vector, bivector, trivector, ⋯, blade (RIGHT[sic])". Aside from "blade", these are homogeneous clifs.  This abuse of the term "blade" to mean a general homogeneous clif extends to other places.  This is particularly odd, since Denker defines a blade to be "any scalar, any vector, or the wedge product of any number of vectors", which is correct.
 * Denker uses k-vector to mean a vector in a k-dimensional space, without acknowledging its standard use to mean a homogeneous clif of grade k. Occasional authors use the term k-multivector.

Discussion
I intend for anyone to edit the above freely, and to remove comments here as they become stale. Removed material can always be retrieved from the page history (at least in principle: actually finding it can be difficult). My comments are simply a sampling: I have not actually run anything, and have only skimmed the documentation. —Quondum 01:14, 17 September 2019 (UTC)


 * Quondum, thank you. I plan to implement most of your suggestions. And, yes, I built this package on a foundation of an orthonormal basis. It does not perform symbolic manipulation except that it does allow undefined symbols (like a1e2), so it is possible to explore and even discover GA relationships. See the very end of my file 'Examples' for some investigation of GA identities. In fact I used these symbolic equations to discover which explicit Hodge dual equation matched the implicit one on your web page. Armed with that I was able to write down a formal proof that the two equations are "tied". matrixbud (talk) 00:06, 19 September 2019 (UTC)


 * Oh. I have developed a Clifford (?) dual function that does the following. It takes terms like e2e3e5 and maps them to e1e4e6, and thus it also maps e1e4e6 to e2e3e5. Then I extended the function using linearity to map any (not necessarily homogeneous) multivector. I presume such a function would be a Clifford dual rather than a Hodge dual? matrixbud (talk) 00:24, 19 September 2019 (UTC)


 * My other operations, including dot product, similarly work with any two multivectors, taking the dot product (or any other product) term-by-term. I will check my documentation to see if I need to clarify my definition of "dot" or whether I already have, but basically I take the GA product and look for terms of grade j - k where j and k are the respective grades of two terms. I'll have to review what I do when j - k is negative but I presume I followed Hestenes. I read what you say above about "dot" and I'll take some appropriate action. It didn't occur to me that the definition wasn't well-established. matrixbud (talk) 00:24, 19 September 2019 (UTC)


 * For Dot and scalar product and the contractions, I followed Denker "Introduction to Clifford Algebra", and I used Abs(j - k), not j - k as I said above. I did not stick to Denker's definition that the multivectors must be homogeneous since I extended multi-linearly. Is there a reason not to generalize products to include non-homogeneous multivectors? I don't expect I would myself do many such calculations, but you never know when there might be an occasion to investigate some property that could arise in this fashion. The extension does not affect the calculation of homogeneous multivectors. matrixbud (talk) 18:19, 19 September 2019 (UTC)


 * There is no reason to avoid extending any of the products multi-linearly if they are suitably defined on all grades, and it is standard to do so. The question is rather which products are useful.  The abs(r − s) is quite common, and is what Dorst labeled the "fat dot product", but it is not algebraically useful.  AFAICT, it is used when reasoning about things grade-by-grade, never when using arbitrary multivectors (precisely because it is poorly behaved in the general case).  In every instance substituting the better-behaved left or right contraction will be equivalent on the grades of interest, and will more likely be true for arbitrary multivectors.  Thus, one can implement it for coverage of operators, but I tend to turn my nose up at any uses of it – especially when the author of a text uses it.  —Quondum 20:14, 19 September 2019 (UTC)


 * I'll be traveling for a while so it may be a week or more before I can tell you what I have changed and I'll likely ask for clarification on a few items. Meanwhile if additional things occur to you, please don't hesitate to document them here. matrixbud (talk) 00:06, 19 September 2019 (UTC)


 * Most of the useful operators are (multi)linear, including all the products, the dual operators (Clifford and Hodge), the grade involutions (like reversion), etc., so you cannot use this to distinguish much. I think it will be more helpful if I find counterexamples where I disagree; this will be more illuminating.  I'll find holes in your Mathematica implementations.  For example your   and several others .  —Quondum 01:54, 19 September 2019 (UTC)


 * Inverse, of course, does not always exist. Reviewing just now what I wrote 2 years ago when I started this, I see that I put caveats to that effect in various places, and even in the output when it was obvious. But I have just added additional warnings during every output to check and confirm. When I wrote this function I had the choice of having nothing or something. Many times it is difficult to compute an inverse by hand when it actually exists. I wanted something that would work at least some of the time. I will look at Norm and Gorm to see if I need more caveats there, too. I won't update my web site until I have made many or most of the changes you recommend. matrixbud (talk) 17:49, 19 September 2019 (UTC)


 * I developed a relatively efficient general inverse computation for multivectors up to grade 5 (http://www.euclideanspace.com/maths/algebra/clifford/algebra/functions/inverse/manfred.htm). Here is the key information:
 * Notation
 * n: The dimension of the generating vector space.
 * A: The general multivector to be inverted.
 * ⟨X⟩: The scalar (grade 0) component of X (in this context all other components are zero anyway; if not, there is some problem).
 * *X: Sign changes in X by grade are +−−++− (Clifford conjugation).
 * ~X: Sign changes in X by grade are ++−−++ (Reversal conjugation).
 * $X: Sign changes in X by grade are ++−+−−−+− (+− for some grades immaterial, applied only to zeros). [ choice of signs to be corrected ]
 * %X: Sign changes in X by grade are +−+−+−−+ . [ choice of signs to be corrected ]
 * #X: Sign changes in X by grade are ++−+−+−+−− . [ choice of signs to be corrected ]
 * Defining identities
 * d = ⟨A fn(A)⟩ and A^−1 = fn(A)/d
 * f0(A) = 1
 * f1(A) = *A
 * f2(A) = *A
 * f3(A) = *A $(A *A)
 * f4(A) = *A $(A *A) or f4(A) = ~A %(A ~A)
 * f5(A) = ~A %(A ~A) #(A ~A %(~A A))
 * —Quondum 18:30, 19 September 2019 (UTC)
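As a concrete illustration of the defining identities, here is a Python sketch of the lowest-dimensional case (mine, not part of either package; the blade names and the Euclidean signature Cl(2,0) are assumptions): f1(A) = f2(A) = *A, and in two dimensions A *A is always a scalar, so A^−1 = *A/⟨A *A⟩ applies whenever that scalar is nonzero.

```python
# Illustrative sketch (not the package's code): the conjugation-based
# inverse above, hand-coded for Cl(2,0) with orthonormal basis {e1, e2}.
BLADES = ('', '1', '2', '12')

# Geometric-product table: (left, right) -> (sign, blade), with e1^2 = e2^2 = 1.
TABLE = {
    ('', ''): (1, ''),     ('', '1'): (1, '1'),    ('', '2'): (1, '2'),   ('', '12'): (1, '12'),
    ('1', ''): (1, '1'),   ('1', '1'): (1, ''),    ('1', '2'): (1, '12'), ('1', '12'): (1, '2'),
    ('2', ''): (1, '2'),   ('2', '1'): (-1, '12'), ('2', '2'): (1, ''),   ('2', '12'): (-1, '1'),
    ('12', ''): (1, '12'), ('12', '1'): (-1, '2'), ('12', '2'): (1, '1'), ('12', '12'): (-1, ''),
}

def gp(x, y):
    """Geometric product of multivectors given as dicts blade -> coefficient."""
    out = {bl: 0.0 for bl in BLADES}
    for a, ca in x.items():
        for b, cb in y.items():
            sign, bl = TABLE[(a, b)]
            out[bl] += sign * ca * cb
    return out

def conj(x):
    """Clifford conjugation *X: signs + - - by grade (grades 0, 1, 2)."""
    sign = {'': 1, '1': -1, '2': -1, '12': -1}
    return {bl: sign[bl] * c for bl, c in x.items()}

def inverse(x):
    """A^-1 = *A / <A *A>; valid only while A *A is a nonzero scalar,
    which always holds in Cl(2,0) for nonzero <A *A>."""
    c = conj(x)
    d = gp(x, c)['']
    return {bl: v / d for bl, v in c.items()}

A = {'': 2.0, '1': 3.0, '2': 0.0, '12': 1.0}
print(gp(A, inverse(A)))   # → {'': 1.0, '1': 0.0, '2': 0.0, '12': 0.0}
```

In higher dimensions A *A acquires non-scalar components, which is exactly where the pattern above starts needing the extra sign-change operators $, %, #.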


 * Quondum, you asked if maybe you had made an algebra mistake in one of your formulas. Maybe I've made a coding mistake using my package? Would you check the following (modified)?


 * A = 2 + e2 - e1e2e3
 * *A = 2 - e2 - e1e2e3
 * A *A = 2 - 4e1e2e3
 * $(A *A) = 2 - 4e1e2e3
 * f3(A) = *A $(A *A) = -2 e2 - 4e1e3 - 10e1e2e3
 * d = -12
 * A^−1 = (1/26) (8 + 7e2 - 2e1e3 + 3e1e2e3)
 * Check: A^−1 A = 1 + (12/13)e2
 * My package: A^−1 = (2/5) - (1/10)e2 + (1/5)e1e3 + (3/10)e1e2e3
 * Check: A^−1 A = 1
 * matrixbud (talk) 16:05, 14 October 2019 (UTC)
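The arithmetic above can be checked numerically. The following is my own Python sketch (not the package's code), assuming the Euclidean signature Cl(3,0), since the thread does not state which signature was used; under that assumption the package's claimed inverse does satisfy A A^−1 = 1.

```python
# Numerical cross-check of the example above (a sketch, not the package's
# code), assuming Cl(3,0).  Blades are bitmasks with bit i <-> e_{i+1}:
# e2 = 0b010, e1e3 = 0b101, e1e2e3 = 0b111.

def reorder_sign(a, b):
    """Sign from reordering basis vectors when multiplying blades a and b."""
    a >>= 1
    total = 0
    while a:
        total += bin(a & b).count('1')
        a >>= 1
    return -1 if total & 1 else 1

def gp(x, y):
    """Geometric product of multivectors (dicts: blade bitmask -> coeff),
    with e_i^2 = +1 assumed."""
    out = {}
    for a, ca in x.items():
        for b, cb in y.items():
            s = reorder_sign(a, b)
            out[a ^ b] = out.get(a ^ b, 0.0) + s * ca * cb
    return {k: v for k, v in out.items() if abs(v) > 1e-12}

A = {0b000: 2.0, 0b010: 1.0, 0b111: -1.0}              # 2 + e2 - e1e2e3
Ainv = {0b000: 2/5, 0b010: -1/10, 0b101: 1/5, 0b111: 3/10}

print(gp(A, Ainv))   # the claimed package inverse checks out: {0: 1.0}
```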


 * I also have simple examples of possible problems for f4 and f5. However... all of your formulas work for many situations. All of them are better than my initial simple formula. Thank you for flagging me on that. Since the 2^n equations in 2^n unknowns approach computes as fast as your sign changes, even in dimensions 5 and 6, I think I will stay with that approach since it seems never to give the wrong answer. matrixbud (talk) 00:19, 16 October 2019 (UTC)


 * My InverseG function now solves the 2^n equations in 2^n unknowns, so this version works for any multivector of any dimension, with caveats... If there is no solution, it prints out a message saying so. It works in under a second up to dimension 5 if you use numerical coefficients, then slows down rapidly. Probably useful to dim 7 or 8. Using symbolic coefficients it is similarly fast unless one enters a generic dim n multivector. In that case it is good for n ≤ 3. It solves the generic inverse for n = 4 in a few hours, but the solution is several pages of Mathematica output -- a very complex formula that Mathematica can only barely simplify. I haven't yet tried, but it may be able to develop symbolic formulas in dimensions 5-8 for special cases like vectors, bivectors, rotors, etc. So it should support higher-dimensional rotations. matrixbud (talk) 19:11, 11 October 2019 (UTC)
 * Seems good – a general approach. Though when the multivector is known to be a versor (e.g. a rotation), then the inverse is very simple: R^−1 = ~R / (R ~R) in any number of dimensions.  —Quondum 13:06, 12 October 2019 (UTC)


 * I have used my package to discover a single general formula for inverse that works in ANY DIMENSION for vectors, bivectors, scalar + vector, scalar + bivector. I have confirmed the formula for dimensions ≤7 using all four combinations of space/spacetime and metric (+++/---). For 3-blades it almost works but will require another -1 correction somewhere. Strangely, it doesn't at first glance work at all for 3-slices (sums of 3-blades).


 * Notation: MaxGrade is the highest grade represented in the multivector. ConstantG is the grade-0 term, if any; FreeTermG is the sum of the terms with grade > 0; and the Boole/Odd factor is 1 if MaxGrade is odd, 0 if it is even. (So the Boole/Odd factor in the denominator is a correction for vectors. It is not needed for bivectors.)


 * The formula is


 * MaxGradeG[clif_] := Max[GradeListG[clif]]


 * InvG[clif_] := (-1)^MaxGradeG[clif] (ConstantG[clif] - (FreeTermG[clif]))/(GormG[clif] - 2 Boole[OddQ[MaxGradeG[clif]]] ConstantG[clif]^2)
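To illustrate the formula, here is a hedged Python sketch (mine, not the package's Mathematica code) specialized to the scalar-plus-vector case a + v in Euclidean R^n: MaxGrade = 1 (odd), ConstantG = a, FreeTermG = v, and GormG = a^2 + |v|^2, so the formula reduces to (v − a)/(|v|^2 − a^2), which fails exactly when |v|^2 = a^2.

```python
# Sketch of the InvG formula above for the scalar+vector special case
# (assumptions: Euclidean metric, A = a + v with a a scalar, v a vector).

def inv_scalar_plus_vector(a, v):
    """Inverse of a + v per the formula above; v is a list of floats."""
    gorm = a * a + sum(x * x for x in v)          # GormG of a + v
    denom = gorm - 2 * a * a                      # MaxGrade = 1 is odd
    return -a / denom, [x / denom for x in v]     # (-1)^1 (a - v) / denom

def product_with(a, v, b, w):
    """(a + v)(b + w) where w is parallel to v, so the wedge part vanishes
    and the product has only scalar and vector parts."""
    scalar = a * b + sum(x * y for x, y in zip(v, w))
    vector = [a * y + b * x for x, y in zip(v, w)]
    return scalar, vector

a, v = 2.0, [2.0, 2.0, 2.0]
b, w = inv_scalar_plus_vector(a, v)
print(product_with(a, v, b, w))   # → (1.0, [0.0, 0.0, 0.0])
```

The check works because (a + v)(v − a) = |v|^2 − a^2 is a scalar: the vector terms av − va cancel and v ∧ v = 0.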


 * If you decide to check mine, I'm always happy to hear whether you confirm or find an exception or mistake. Between us we cover all multivectors up to grade 5, all vectors and bivectors of any dimension, and all scalar + vector and scalar + bivector multivectors of any dimension. The latter could be useful for rotations in higher dimensions as well as boosts (although you already cover boosts). I haven't upgraded my package yet to include this stuff. I presume you are giving me permission to include your formula?


 * Oh. I also have developed a Mathematica function that solves for the inverse of any multivector in any dimension. It doesn't have a formula. Rather, it uses Mathematica's Solve function to solve 2^n equations in 2^n unknowns and come up with the inverse. The trouble is that it is slow. Good for n ≤ 3, ok for n = 4, very very slow for n = 5.


 * matrixbud (talk) 18:52, 1 October 2019 (UTC)


 * I have found a way to avoid having to solve 2^n equations in 2^n unknowns. The technique actually does not grow at all with the dimension (with a caveat I am about to give). Rather, it depends on the multivector being inverted, not the dimension n. For a given multivector I compute a number m and solve m equations in m unknowns. Generally m is not much larger than the number of terms in the multivector. So, a multivector with, say, 6 terms might require only 8 equations, and it would not matter how large the highest grade of the multivector is. I have tried a number of random such multivectors in dimension 50 and they compute instantly, and of course it is even easier to multiply the multivector and its inverse to confirm. The caveat is that if the multivector has some large number of terms, say half of 2^n, then m will equal 2^n. So this technique will still be very slow for a general symbolic multivector in dimensions 5 and above, but can be very, very fast for other multivectors. I put this latest version on my web site.
 * This sounds plausible, since inverses will be limited to a subspace dictated by the form of the multivector (in the sense of being a linear sum of blades), as well as being interesting (in the sense of leading to a workable implementation of the inverse). It would need checking, of course.  —Quondum 16:32, 27 October 2019 (UTC)
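The subspace idea above can be sketched concretely. Below is my own illustrative Python sketch (not matrixbud's Mathematica implementation; the helper names are invented), assuming an orthonormal basis with e_i^2 = +1 and blades encoded as bitmasks: restrict the unknown inverse X to the blades generated multiplicatively by the multivector's own terms, then solve that (usually small) linear system A X = 1.

```python
# Sketch (assumptions: Euclidean metric, bitmask blades, invented names) of
# inverting a sparse multivector on the closure of its own blades.

def reorder_sign(a, b):
    """Sign from reordering basis vectors when multiplying blades a and b."""
    a >>= 1
    total = 0
    while a:
        total += bin(a & b).count('1')
        a >>= 1
    return -1 if total & 1 else 1

def gp_blade(a, b):
    """Geometric product of two basis blades: (sign, resulting blade)."""
    return reorder_sign(a, b), a ^ b          # e_i^2 = +1 assumed

def blade_closure(blades):
    """Smallest blade set containing the scalar and closed under products
    with the given blades -- the blades appearing in powers of A."""
    closed = {0}
    frontier = [0]
    while frontier:
        s = frontier.pop()
        for b in blades:
            t = s ^ b
            if t not in closed:
                closed.add(t)
                frontier.append(t)
    return sorted(closed)

def gauss_solve(M, rhs):
    """Plain Gauss-Jordan elimination with partial pivoting; assumes the
    system is nonsingular (i.e. the multivector is invertible)."""
    n = len(M)
    aug = [row[:] + [r] for row, r in zip(M, rhs)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(aug[r][col]))
        aug[col], aug[piv] = aug[piv], aug[col]
        for r in range(n):
            if r != col and aug[r][col]:
                f = aug[r][col] / aug[col][col]
                aug[r] = [u - f * v for u, v in zip(aug[r], aug[col])]
    return [aug[i][n] / aug[i][i] for i in range(n)]

def inverse(A):
    """Solve A X = 1 with X supported on the closure of A's blades."""
    support = blade_closure(list(A))
    idx = {bl: i for i, bl in enumerate(support)}
    m = len(support)
    # M[i][j] = coefficient of blade support[i] in A * e_{support[j]}
    M = [[0.0] * m for _ in range(m)]
    for j, s in enumerate(support):
        for b, c in A.items():
            sign, bl = gp_blade(b, s)
            M[idx[bl]][j] += sign * c
    rhs = [0.0] * m
    rhs[idx[0]] = 1.0
    x = gauss_solve(M, rhs)
    return {bl: x[j] for j, bl in enumerate(support) if abs(x[j]) > 1e-12}

# Example: A = 1 + e1e2.  The closure is just {1, e1e2}, so only a 2x2
# system is solved, regardless of the ambient dimension.
print(inverse({0b00: 1.0, 0b11: 1.0}))   # → {0: 0.5, 3: -0.5}
```

This matches the described behaviour: the system size m depends on the multivector's terms, not on n, degenerating to 2^n only when the terms generate the whole algebra.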


 * Using my package to investigate lots of cases, I have found a very pretty formula for the inverse of a general symbolic bivector in 4 dimensions. As you know, in 3 dimensions we get A^−1 = −A / ||A||^2. In dimension 4 this formula does not work. Here is the formula I have found. Let R = grade 3 slice of A, S = A − R. So, A = R + S. Let B = R − S. Then A^−1 = BBA / ||AB||^2. The dim 4 formula reduces to the dim 3 formula for bivectors of dim 3 since AA = −||A||^2. Have you seen this formula or anything like it? It may be in some book but I can't find it on the web. Unfortunately, I can think of several natural ways to extend this to dimension 5 and higher, so I am afraid that task may require considerable work. Clearly the numerator goes from −A to BBA and then possibly to CCBBA for some C, and the denominator would grow from ||A||^2 to ||AB||^2 to ||ABC||^2. I will probably use my package to check a couple obvious candidates for C but if I don't find it fairly fast I think I will move on to other things. After finding a formula for dim 5, it will probably be obvious what the general formula is. matrixbud (talk) 02:16, 25 October 2019 (UTC)
 * I'll need to think about this. For ease of editing, I'm creating a subsection below and copying your statement to it.  —Quondum 16:32, 27 October 2019 (UTC)


 * Sounds like a good way to go: one will not be bumping into the constraints when using it. For performance, one would have to get specialized anyway, probably not a serious consideration now.  —Quondum 13:06, 12 October 2019 (UTC)


 * I see that something almost identical to my results was published a few months after my stuff was posted online (but considerably more thorough): . This has since been generalized to an arbitrary number of dimensions, e.g. here.  You are welcome to use "my" formula – I regard it as free for use, since I chose to post it in an open forum and it has been superseded by other work that was likely independently produced at about the same time.  You may even prefer the arXiv paper's version.
 * Trying to squeeze all special cases into a single formula seems a bit pointless: it is simpler if the cases are kept separate. The scalar+vector case for both our formulae will inherently work in any number of dimensions: this is easy to prove.  My grade-2 case does not work for scalar+vector+bivector in three or more dimensions, but looks like it should work for scalar+bivector for any number of dimensions (I expect a proof would not be difficult); this probably is equivalent to your formula.  You'll notice a lot of repetition in my formulae, which makes them easy to optimize.  If you are dealing with special cases, versors (rotors and rotoreflections) are a significant class of multivectors that are easy to invert in any number of dimensions: M^−1 = ~M / (M ~M); here the denominator is always a scalar.
 * If Mathematica was smart enough to frame the solution as a matrix inverse, it would probably be fast. Algebraically, every GA can be written as a complex matrix algebra (or the direct sum of two copies), which makes the inverse problem fairly uninteresting academically, although not bad for building understanding.


 * I can't imagine that Mathematica doesn't use matrix algebra for its solutions to the 'Solve' function. The matrix determinants are surely what cause the symbolic formulas to balloon. matrixbud (talk) 19:11, 11 October 2019 (UTC)
 * One could check by timing experiments. This is the kind of thing they are likely to have spent time optimizing (unlike the myriad things they don't bother with, like a broken 'undo' operation or other usability issues).  'Solve' is quite generic, but I suppose they might detect when it is linear.  Most linear equations are not solved using a general matrix inverse: much slower than Gaussian elimination for sparse matrices. Also, they should find partial solutions, which a matrix inverse will not do.  Anyhow, not something of real interest here, I guess: 'Solve' probably works pretty well, and probably will not be improved on readily.  —Quondum 13:06, 12 October 2019 (UTC)


 * I haven't had much time to do any checking, and would anyway not be interested in restricted cases. —Quondum 02:34, 2 October 2019 (UTC)


 * I will check out the other references. Thanks.
 * I had in mind to develop the general case and I was working my way up to it via special cases that work for every n as opposed to your approach of equations that work for all multivectors but for only a given n. I wanted to try my own approach before looking at yours because, as you know, once you have seen someone else's approach it is hard to develop something fresh. So far I like your approach better and I was pretty sure that it generalized to any number of dimensions, which you have now confirmed. FYI, while programming your equations I had just gotten stuck on dimension 3 not working for, say, scalar + trivector, and I was going to ask you if I had made a mistake. So I appreciate your pre-empting me with notice that it doesn't always work.
 * By the way, my formula is very similar to yours though it may not appear so at first glance. For example, my constant − FreeTerm is just a sign change from +++... to +---... . And my Gorm − 2 constant is just another sign change to the denominator that we both use. The (−1)^grade term is just another ± sign change.
 * I still think it is possible that my single formula, which computes almost instantaneously (I have only checked up to dim = 7 thus far) will in fact generalize to any multivector in any dimension with just a few more tweaks, but of course I won't know till I try. I will probably work on it a little more.
 * matrixbud (talk) 15:18, 2 October 2019 (UTC)


 * You're right – it does make sense to look into something first without looking at what someone else has done. And yes, I recognized that your formula is similar up to grade 2.
 * For clarity, each of my formulae should work for all multivectors up to the dimension and grade that it is intended for, e.g. f4 should work for all invertible multivectors in a Clifford algebra of a vector space of dimension 4. If it doesn't, it is possible that I have a minor error, but the first reference I gave gives an almost (but not quite) identical expression for which grades to negate, which might indicate a slip on my part.
 * Don't get your hopes up about "a few more tweaks". This operation is (at best) computationally equivalent to inverting a 2^(n/2)×2^(n/2) matrix, which is more complex than a fixed number of geometric multiplications.  This is reflected in the growing complexity of the expression in the maximum dimension.  Also, clearly there is academic work that comes to the same conclusion.  —Quondum 20:32, 2 October 2019 (UTC)


 * I've put together a symbolic manipulation of GA expressions. I have an operator that will negate any chosen selection of grades (what you call slices).  For any of the expressions of the type that I gave above (where the product of the original multivector and the numerator clearly equals the denominator), it is necessary and sufficient that the denominator evaluates to a nonzero scalar for the expression to be the inverse of a multivector.  Using this, I can show that my f2 works for bivector or scalar+bivector only up to dimension 3, and for scalar+vector+bivector only up to dimension 2.  My expectation is that your expression would have a very similar limitation: that your inverse will not work for a general bivector in 4 dimensions.  —Quondum 02:18, 3 October 2019 (UTC)
 * A specific counterexample: try e0e1 + e2e3. —Quondum 02:22, 3 October 2019 (UTC)


 * Just back from a tennis tournament. Exhausted, but just read this and checked. My formula gives that the inverse is (−e0e1 − e2e3) / 2. I hope to get back to this tomorrow or at least within a few days. matrixbud (talk) 22:42, 6 October 2019 (UTC)


 * As an aside, I have a rather different approach that I'm interested in – more theoretical than anything. This involves defining only the geometric product, but in a balanced algebra over a space of double the number of dimensions.  In this, the exterior product on the original space is just the geometric product.  Pretty much all the operators become very simple expressions using the geometric product.  —Quondum 13:06, 12 October 2019 (UTC)

I have made all the suggested improvements that I was able to..., and more. Thank you, Quondum. I have uploaded the new version to my web site.

While taking the suggestions for "correct" naming I also used this opportunity to improve the names of several of the functions. A list of the changes is found at the bottom of the source file where I explain version changes. One example is that I kept the name pSliceG for a slice of an existing multivector, but I renamed SliceG to GradedClifG for a generic homogeneous multivector since it is not a slice of anything. I also adopted the term "atom" in the names of several functions, since it is such a descriptive term.

Now I just need to figure out a way to "advertise" this package (freeware) since I think the package is very hard to find, even when someone searches the web for a GA mathematica package. I, myself, cannot find it in a search.

matrixbud (talk) 01:45, 13 October 2019 (UTC)


 * I confirm your calculations: my formulae are incorrect; they only work when d is a scalar. You'd have to use the grade negations given in the paper that I linked to above. (arXiv 1104.0067, also cited by others).  A formula that is ever incorrect is invalid.  The paper seems to produce the correct results, though I have a bug in my Mathematica package that is giving me a headache, preventing me from reliably reproducing their results.  I have not looked at your revised version yet.  As to publishing your package, I don't see much point until it is fairly polished and powerful (including graphics), else it will attract no interest.  You should get a sense of what other packages are out there.  For discussions of implementations, there is a group with occasional activity at Geometric_algebra – Google groups that you could join.  —Quondum 02:27, 16 October 2019 (UTC)
 * I meant to mention: you seem to highlight a few selected signatures. In the context of geometric algebra, arbitrary signatures (indeed, an arbitrary assignment of sign to the square of each standard basis element) is pretty much essential for everyday usage.  For example, CGA of a Minkowski space could have a signature (+, −, +, −, −, −).  The signatures that I am most interested in are balanced, e.g. (+, −, +, −, +, −, +, −).  —Quondum 13:34, 16 October 2019 (UTC)
 * Not that this result would be of interest to you (since you have a reasonably efficient functioning general inverse), but I have confirmed the results of the paper on a general orthogonal basis in dimensions 0 through 5 using my Mathematica package that takes a more general symbolic approach. In five dimensions it takes a portion of an hour, and det[X] has a little over  terms fully expanded.  This is essentially just a check of my package (still not guaranteed bug-free!).
 * What might interest you is the greater power in the basic operations of addition and multiplication: it will deal with arbitrary symbols (no need for a basis), but exploits two types of tag: one to identify an expression as a grade-0 element, and one to identify a symbol (of any form, e.g. subscripted) as a grade-1 element (a vector). You can specify a canonical ordering of vectors, and it will swap vectors using the (not necessarily known) bilinear form, resulting in expressions (or subexpressions) composed of scalars and vectors being simplified.  One can specify the bilinear form between any pair of vectors, for example between symbols that you wish to treat as the basis, but even if you don't, it understands that the geometric product of a vector with itself is a grade-0 element.  This could serve as the "core" of your geometric product, making it more capable.  As an example one could convert between different bases of the same vector space, which you cannot do in your package.  —Quondum 14:25, 20 October 2019 (UTC)

Bivector inverse proposal
Using my package to investigate lots of cases, I have found a very pretty formula for the inverse of a general symbolic bivector in 4 dimensions. As you know, in 3 dimensions we get A^−1 = −A / ||A||^2. In dimension 4 this formula does not work. Here is the formula I have found. Let R = grade 3 slice of A, S = A − R. So, A = R + S. Let B = R − S. Then A^−1 = BBA / ||AB||^2. The dim 4 formula reduces to the dim 3 formula for bivectors of dim 3 since AA = −||A||^2. Have you seen this formula or anything like it? It may be in some book but I can't find it on the web. Unfortunately, I can think of several natural ways to extend this to dimension 5 and higher, so I am afraid that task may require considerable work. Clearly the numerator goes from −A to BBA and then possibly to CCBBA for some C, and the denominator would grow from ||A||^2 to ||AB||^2 to ||ABC||^2. I will probably use my package to check a couple obvious candidates for C but if I don't find it fairly fast I think I will move on to other things. After finding a formula for dim 5, it will probably be obvious what the general formula is. matrixbud (talk) 02:16, 25 October 2019 (UTC)


 * I should be able to check this in the general case. What is your notation ||⋅||?  The gorm, or some other norm?  —Quondum 16:32, 27 October 2019 (UTC)


 * I'll use the notation ⟨~X X⟩_0 to denote the gorm of X, rather than using ||X||^2 (to avoid problems with signs). I do not restrict the signature of the quadratic form.
 * A^−1 = ~A / ⟨~A A⟩_0 holds iff ~A A = ⟨~A A⟩_0 ≠ 0. For this to hold, we only have to prove the latter.
 * Let A be a bivector in three dimensions. Calculating the latter confirms that it is a scalar: we use an orthogonal basis, where gii ≝ ei⋄ei:
 * A = a12 e1⋄e2 + a13 e1⋄e3 + a23 e2⋄e3
 * ∴ ~A A = −A^2 = g11 g22 a12^2 + g11 g33 a13^2 + g22 g33 a23^2
 * Let A be a bivector in four dimensions. You say "Let R = grade 3 slice of A", which I interpret as R = ⟨A⟩_3, which is inherently zero for any bivector A, so I get stuck at this point.  You also say "S = A - R. So, A = R + S. Let B = R - S", hence B = R − S = R − (A − R) = −A.  What have I missed?
 * I note, however, that your expression is quadratic in the denominator and cubic in the numerator. This is the same as for the general inverse in four dimensions given by the reference that I mentioned, though you get a scalar from a quadratic expression before squaring again.  I will need the above issues to be rectified before I can dig into this further.
 * —Quondum 02:29, 29 October 2019 (UTC)

General inverse
I have looked at your approach for a general inverse of a general multivector that is quick for sparse inputs, as you describe in your implementation comments: namely, you effectively find what you call atoms in the powers of the multivector to be inverted, and solve a set of linear equations for their coefficients. I'm pretty convinced that this is sound. The method of multiplying conjugations that I gave (albeit with errors) will likely be faster by a small factor if the coefficients are extracted and appropriately multiplied, but beyond dimension 5 it fails, whereas yours will work in any number of dimensions. Since this is a quick and useful general approach, I don't see much point in pushing this more. —Quondum 02:15, 3 November 2019 (UTC)