Talk:Lenstra–Lenstra–Lovász lattice basis reduction algorithm

Requested move

 * Lenstra-Lenstra-Lovász lattice reduction algorithm → Lenstra-Lenstra-Lovász lattice basis reduction algorithm … Rationale: The basis is reduced, not the lattice. … Please share your opinion at Talk:Lenstra-Lenstra-Lovász lattice reduction algorithm. Nuesken 17:24, 18 May 2006 (UTC)


 * There are likely thousands of papers which talk about "lattice reduction"; it's a commonly accepted term. I think an appropriate title for the article is "Lenstra–Lenstra–Lovász lattice reduction".  Note that there is a Wikipedia article titled "lattice reduction". The current long title looks ridiculous. Sanpitch (talk) 00:39, 15 February 2020 (UTC)

2nd shortest not equal 2nd successive minimum
I attempted to clarify text related to some issues:
 * There is not 'the' shortest vector, since there are always at least two of them
 * One has to be very careful about claims like "2nd shortest vector", as such talk is often incorrect for dimension >= 5. For example, consider the parity lattice defined by $$\{ (x_i) \mid x_i \equiv x_j \pmod{2} \text{ for all } i,j \}$$. It has $$n$$ linearly independent vectors of length 1, yet its last successive minimum satisfies $$\lambda_n = \sqrt{n} > 1$$ (once the dimension $$n$$ is >= 5). 131.234.72.252 (talk) 13:52, 16 June 2008 (UTC)
 * I apologize, what I wrote yesterday in a hurry is obviously incorrect. The parity lattice has a full set of linearly independent vectors of length 2 (not 1), and there are no shorter nonzero vectors in the lattice (for dimension >= 5). However, those vectors do not form a lattice basis, because the vectors of odd parity cannot be written as integer linear combinations of vectors of even parity. 89.245.87.65 (talk) 06:24, 17 June 2008 (UTC)
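The corrected claim above can be checked by brute force. Here is a small illustrative Python sketch for $$n = 5$$, searching the box $$[-2,2]^5$$ (which contains all candidate shortest vectors, since any odd-parity vector with a coordinate of absolute value 3 or more is already longer than 2):

```python
# Brute-force check of the parity-lattice claim for n = 5: the lattice
# L = { x in Z^5 : all x_i have the same parity } has shortest nonzero
# vectors of squared length 4 (the 10 vectors +-2*e_i), yet no basis of
# L consists of shortest vectors, since e.g. (1,1,1,1,1) is in L but is
# not an integer combination of the +-2*e_i.
from itertools import product

n = 5

def in_lattice(x):
    return len({xi % 2 for xi in x}) == 1  # all coordinates same parity

# Nonzero lattice points in the box [-2, 2]^5.
pts = [x for x in product(range(-2, 3), repeat=n)
       if any(x) and in_lattice(x)]
min_sq = min(sum(xi * xi for xi in x) for x in pts)
shortest = [x for x in pts if sum(xi * xi for xi in x) == min_sq]

print(min_sq)                # 4: squared length of the shortest vectors
print(len(shortest))         # 10: exactly the vectors +-2*e_i
print(in_lattice((1,) * n))  # True: (1,1,1,1,1) lies in the lattice
```

The odd-parity vectors $$(\pm 1,\ldots,\pm 1)$$ have squared length 5, which is why the shortest vectors are precisely $$\pm 2 e_i$$ once $$n \geq 5$$.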

Description of the algorithm
It seems to me that a description of the actual algorithm (at least in high-level pseudocode) is needed in order for this article to be in accord with its title. —Preceding unsigned comment added by 84.97.180.156 (talk) 23:09, 22 November 2009 (UTC)

Lattice algorithms in general
I agree that more needs to be said about what the algorithm does. Moreover, I fail to see how a lattice, which is a partial ordering on a set of points, can have a basis in the first place.

Grandfather Gauss saw everything mathematical as embedded in some numerical space. As far as I know he only ever gave credit to another mathematician once in his lifetime - Bernhard Riemann.

He definitely did not expect the consequences of finite permutation groups (polynomials of degree >= 5) as outlined by Galois. But even to this day everyone embeds finite problems in larger spaces.

This simply is not discrete mathematics, and it addresses lattice traversal in a roundabout way. Reply: j.heyden@bigpond.com — Preceding unsigned comment added by 120.145.16.125 (talk) 08:41, 22 August 2011 (UTC)
 * The lattices that are partial orderings are not the same things as the lattices described here, which are regularly spaced sets of points in a Euclidean space. The fact that two such unrelated mathematical objects have the same name is unfortunate but true. —David Eppstein (talk) 17:08, 22 August 2011 (UTC)


 * The article's lead should be more explicit about the kind of lattice that is being talked about here. True, the word 'lattice' points to lattice (group), but I don't think that's sufficient. I've taken a stab. --Macrakis (talk) 23:44, 16 January 2015 (UTC)

Contradiction between algorithm description and example
There's a contradiction between the description of the algorithm and the example given. The description indicates the first loop is completed in its entirety before the reduction steps are executed for any basis vector. However, the example shows that the reduction condition is evaluated for the second basis vector before the third basis vector is ever initialized. It doesn't seem like the two steps are independent, so I would think that the order matters. Which is correct? --Prestidigitator (talk) 18:24, 4 February 2013 (UTC)
 * I agree; I tried to walk through the example following the algorithm and came to the same conclusion. Moreover, I think the pseudocode for the algorithm lacks some rigour. E.g., after the Gram-Schmidt is performed and k is initialized to 2, the algorithm says to check whether $$|\mu_{i,j}| > \frac{1}{2}$$ and then execute subroutine RED(k, k-1). This seems weird, as i=n and j=n-1 after the end of the Gram-Schmidt orthogonalization, and it is my impression that we should really be looking at $$|\mu_{2,1}|$$ at this point. At least that is also what is done in the example, although there it is done in the middle of the GS process.
 * In the example I also see something that does not seem consistent. In step 6.1.i the vector b3 is reduced. OK, makes sense. Then step 6.1.i is repeated with another $$\mu_{i,j}$$ (makes sense), where the reduction factor $$r=1$$ is found (OK) and then used to reduce $$b_3=b_3-b_2$$ (OK). But when the vectors are inserted, it is the original vector $$b_3$$ that appears, not the one that was just reduced by five times $$b_1$$ above. I find that confusing. Is that not an error? --Slaunger (talk) 20:43, 29 January 2014 (UTC)
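For what it's worth, the order does matter: each subtraction in the size-reduction loop must act on the current value of $$b_3$$. A tiny Python illustration with made-up vectors, chosen so that $$b_1^\ast = (1,0,0)$$ and $$b_2^\ast = (0,1,0)$$ and each $$\mu$$ is therefore a plain coordinate:

```python
# Hypothetical vectors for illustration: b1* = (1,0,0), b2* = (0,1,0),
# so each Gram-Schmidt coefficient mu is just a coordinate of b3.
b1 = (1, 0, 0)
b2 = (4, 1, 0)   # b2* = b2 - 4*b1 = (0, 1, 0)
b3 = (9, 3, 1)

def sub(u, r, v):
    """Return u - r*v componentwise."""
    return tuple(ui - r * vi for ui, vi in zip(u, v))

# Correct sequential size reduction, j = 2 then j = 1, each mu taken
# from the *current* value of the vector being reduced:
t = sub(b3, 3, b2)    # mu_{3,2} = <b3, b2*> = 3   ->  t = (-3, 0, 1)
t = sub(t, -3, b1)    # mu_{3,1} = <t,  b1*> = -3  ->  t = (0, 0, 1)

# Incorrect: taking the second coefficient from the *original* b3
# (mu = <b3, b1*> = 9) leaves the result far from size-reduced:
bad = sub(sub(b3, 3, b2), 9, b1)   # (-12, 0, 1)

print(t, bad)
```

The sequential version ends with the short vector (0, 0, 1); reusing the stale coefficient leaves a vector of squared length 145, which is not size-reduced against $$b_1$$.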


 * I thought the example was overly verbose and kept only the initial basis and the reduced basis. I added an example over the complex integers. I guess Wikipedia is not a textbook, so there's no need to have such exquisitely detailed examples.


 * I also guess the pseudocode could be improved, though I'm not ambitious enough to do so now. Sanpitch (talk) 00:44, 15 February 2020 (UTC)

Update Pseudocode Formatting
I've re-written the LLL pseudo-code for this page. Should it be swapped in?

INPUT
 a lattice basis $$\mathbf{b}_0,\mathbf{b}_1,\ldots,\mathbf{b}_n \in \Z^m$$
 a parameter $$\delta$$ with $$\tfrac{1}{4} < \delta < 1$$, most commonly $$\delta = \tfrac{3}{4}$$

PROCEDURE
 $$\mathbf{B^\ast} \gets {\rm GramSchmidt}(\{\mathbf{b}_0,\ldots,\mathbf{b}_n \}) = \{ \mathbf{b}_0^\ast,\ldots,\mathbf{b}_n^\ast \};$$  and do not normalize
 $$\mu_{i,j} \gets \frac{\langle \mathbf{b}_i, \mathbf{b}_j^\ast\rangle}{\langle \mathbf{b}_j^\ast,\mathbf{b}_j^\ast \rangle};$$  using the most current values of $$\mathbf{b}_i$$ and $$\mathbf{b}_j^\ast$$
 $$k \gets 1;$$
 while $$k \leq n$$ do
  for $$j$$ from $$k-1$$ to $$0$$ do
   if $$|\mu_{k,j}|>\tfrac{1}{2}$$ then
    $$\mathbf{b}_k \gets \mathbf{b}_k - \lfloor \mu_{k,j} \rceil \mathbf{b}_j;$$  (where $$\lfloor \cdot \rceil$$ denotes rounding to the nearest integer)
    Update $$\mathbf{B^\ast}$$ and the related $$\mu_{i,j}$$'s as needed. (The naive method is to recompute $$\mathbf{B^\ast} \gets {\rm GramSchmidt}(\{\mathbf{b}_0,\ldots,\mathbf{b}_n \})$$ whenever some $$\mathbf{b}_i$$ changes.)
   end if
  end for
  if $$\langle \mathbf{b}_k^\ast, \mathbf{b}_k^\ast \rangle \geq \left( \delta - \mu_{k,k-1}^2\right)\langle \mathbf{b}_{k-1}^\ast, \mathbf{b}_{k-1}^\ast \rangle$$ then
   $$k \gets k+1;$$
  else
   Swap $$\mathbf{b}_k$$ and $$\mathbf{b}_{k-1};$$
   Update $$\mathbf{B^\ast}$$ and the related $$\mu_{i,j}$$'s as needed.
   $$k \gets \max(k-1,1);$$
  end if
 end while

OUTPUT
 the LLL-reduced basis $$\mathbf{b}_0,\ldots,\mathbf{b}_n$$
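For readers who want to experiment, here is a minimal Python sketch of the pseudocode above with $$\delta = \tfrac{3}{4}$$. It uses exact rational arithmetic, recomputes the Gram-Schmidt data naively after every change, rounds $$\mu$$ to the nearest integer in the size-reduction step, and returns the reduced basis vectors $$b_i$$ themselves (not the orthogonalized $$b_i^\ast$$):

```python
# Minimal LLL sketch (delta = 3/4) following the pseudocode above.
# Exact arithmetic via Fraction; Gram-Schmidt is recomputed naively
# after every change, so this is for illustration, not speed.
from fractions import Fraction

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def gram_schmidt(B):
    """Un-normalized Gram-Schmidt: returns B* and the mu coefficients."""
    n = len(B)
    Bs, mu = [], [[Fraction(0)] * n for _ in range(n)]
    for i in range(n):
        v = list(B[i])
        for j in range(i):
            mu[i][j] = dot(B[i], Bs[j]) / dot(Bs[j], Bs[j])
            v = [vi - mu[i][j] * bj for vi, bj in zip(v, Bs[j])]
        Bs.append(v)
    return Bs, mu

def lll(basis, delta=Fraction(3, 4)):
    B = [[Fraction(x) for x in b] for b in basis]
    n = len(B)
    Bs, mu = gram_schmidt(B)
    k = 1
    while k < n:
        for j in range(k - 1, -1, -1):       # size reduction
            if abs(mu[k][j]) > Fraction(1, 2):
                r = round(mu[k][j])          # nearest integer
                B[k] = [bk - r * bj for bk, bj in zip(B[k], B[j])]
                Bs, mu = gram_schmidt(B)     # naive update
        # Lovasz condition
        if dot(Bs[k], Bs[k]) >= (delta - mu[k][k - 1] ** 2) * dot(Bs[k - 1], Bs[k - 1]):
            k += 1
        else:
            B[k], B[k - 1] = B[k - 1], B[k]  # swap, then step back
            Bs, mu = gram_schmidt(B)
            k = max(k - 1, 1)
    return [[int(x) for x in b] for b in B]  # the reduced b_i, not b_i*

# The classic worked example: this basis reduces to (0,1,0), (1,0,1), (-1,0,2).
print(lll([[1, 1, 1], [-1, 0, 2], [3, 5, 6]]))
```

Tracing the example by hand matches the printed result: one size reduction and two swaps bring the basis to (0,1,0), (1,0,1), (-1,0,2).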

EDIT by Logannc11:

First of all, I'm not really sure how I'm supposed to talk with you. This is the only article I've edited.

That said, I have three things:
 * 1) Your revision of the swap step is misleading: 'Swap $$b_k$$ and $$b_{k-1}$$' explicitly states that both change, as opposed to the normal notation of $$b_k = b_{k-1}$$, which your use of swap seems to indicate. (FIXED). This was a silly mistake by me; I wrote $$(b_k, b_{k-1}) \gets (b_{k-1}, b_k)$$ and then undid it.
 * 2) You're missing an 'output' section which, while obvious to some, should be there for completeness. And, let's face it, implementing a new algorithm can sometimes fry the brain, so the explicitness can be good. Readers might, for some reason, suppose that the output is $$\{b^\ast_0,\ldots\}$$. Shouldn't the algorithm return $$b^\ast$$? EDIT: The orthogonalization of $$b$$ does not necessarily span the same lattice. This is why we do $$b_k = b_k - \ldots$$ rather than use the orthogonalization, and why we return $$b$$.
 * 3) The rest of the changes are largely aesthetic. I've seen algorithms displayed both ways and I am hesitant to say either is inherently superior. Do you feel that your edit improves on something in particular?

— Preceding unsigned comment added by Logannc11 (talk • contribs) 07:04, 20 May 2015 (UTC)

EDIT by Vrbatim:

My intention was to *only* improve the aesthetic layout of the algorithm (though your suggestions should be implemented). It is because I'm not sure which style of "pidgin code" is better that I'm putting it on the "talk page" instead of swapping it out.

Vrbatim (talk) 06:59, 22 May 2015 (UTC)

EDIT by Logannc11:

Do whatever you feel is best. But, again, note that we don't return $$b^*$$

Logannc11 (talk)

Propose to delete most of the page
The German version is better and shorter than this one. I propose to delete most of what we have, and to replace it with a translation of the German version. Before I do that, I'll wait a while to see if there is agreement on this. MvH (talk) 15:00, 31 October 2015 (UTC)
 * Yes, please. This article needs help. (I assume you mean a hand-written translation, though; machine translations are not good enough.) —David Eppstein (talk) 17:54, 31 October 2015 (UTC)

Put "Applications" ahead of "Example"
The "Applications" section appears to me to be more useful than the overly-verbose "Example" section; I swapped them. I guess the example should be deleted, with the reference to the place it came from kept; I'll do that in a future edit. Sanpitch (talk) 01:12, 16 January 2020 (UTC)