Talk:Simplex noise

Drawbacks?
Any drawbacks, as compared to Perlin? Shinobu 05:39, 9 December 2006 (UTC)


 * Several
 * 1. 2D Perlin, especially improved Perlin, is probably going to be faster in software, depending on how you implement either.  Simplex noise is really built to scale into higher dimensions.
 * 2. It's generally more complex to understand the implementation.
 * 3. Strictly speaking, tiling the noise isn't intrinsically possible.  There are ways around this, but they involve either sacrificing a little isotropy or evaluating 2^(number of dimensions) times as much noise and lerping between the copies...
 * Hope this helps. I've glossed over the points made here.  I can expand on them if anyone is interested.  --Numsgil 06:28, 9 December 2006 (UTC)
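To illustrate the blending workaround from point 3 above, here is a rough Python sketch for the 1D case (in d dimensions the same idea needs 2^d evaluations blended together). The function names are mine, and `noise1` is just a smooth stand-in, not a real simplex noise implementation:

```python
# Make a non-tiling 1D noise function tile with period P by evaluating
# it twice and lerping between the copies. This is the "2^d evaluations"
# trick sketched in the thread, not code from any reference implementation.
import math

def noise1(x):                      # placeholder smooth function
    return math.sin(2.7 * x) * math.cos(1.3 * x)

def tileable_noise1(x, period):
    x = x % period                  # work inside one tile
    t = x / period
    # Blend the noise with a copy shifted back by one period.
    return (1.0 - t) * noise1(x) + t * noise1(x - period)
```

At x = 0 the blend is entirely `noise1(0)`, and as x approaches the period it becomes entirely `noise1(-period + x) -> noise1(0)`, so the tile edges match seamlessly.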


 * I probably do not properly understand any of these points. Here's my current take:


 * 1. Is not the low-dimensional speed issue an implementation issue, and not anything intrinsic in the algorithm?


 * 2. Is not the understanding issue more about available documentation than the algorithms?


 * 3. Aren't tiles necessarily non-isotropic?  (If they were completely isotropic, why use tiles at all?)


 * 159.54.131.7 21:10, 1 October 2007 (UTC)


 * 1. Maybe, but I've never seen a Simplex implementation that beats a Perlin one.  'Course, I've only ever seen two :P
 * 2. There is a real difficulty understanding/visualizing how to determine which simplex a point lies in for higher dimensions.  That's the primary stumbling block.  The rest follows like regular Perlin.
 * 3. I'm not sure I'm understanding your point.  To my original statement, the way to tile the noise is to nudge the simplexes from being purely equilateral (which causes irrational numbers to come into play) to being almost equilateral using rational numbers.  I can dig up the email I got last year if anyone's interested in the details.  --Numsgil 07:35, 3 October 2007 (UTC)
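On the "which simplex" stumbling block in point 2: one way it is commonly done (a sketch under my own naming, not code from any particular implementation) is to sort the fractional skewed coordinates. Stepping from the cell corner along the axes in decreasing order of those fractions visits exactly the corners of the containing simplex:

```python
# Determine the corners of the simplex containing a point, given its
# fractional coordinates inside the skewed unit hypercube. Sorting the
# fractions in decreasing order gives the axis order in which to walk
# from corner (0,...,0) to (1,...,1); the visited corners are the
# simplex vertices. Hypothetical helper, not a reference implementation.
def simplex_corners(frac):
    """frac: fractional skewed coordinates, each in [0, 1)."""
    n = len(frac)
    order = sorted(range(n), key=lambda i: frac[i], reverse=True)
    corner = [0] * n
    corners = [tuple(corner)]
    for axis in order:
        corner[axis] = 1
        corners.append(tuple(corner))
    return corners  # the n+1 corners of the containing simplex

# e.g. in 3D, simplex_corners((0.7, 0.2, 0.5)) gives
# [(0, 0, 0), (1, 0, 0), (1, 0, 1), (1, 1, 1)]
```

This also makes the n! count concrete: each of the n! orderings of the fractions picks out a different simplex in the cell.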

"Squashed hypercube?"
Perhaps there should be some more discussion of what a "simplex" is in this context?

Gustavson's external link talks about treating n! n-dimensional simplices ("hypertriangles") as a squashed hypercube, but does not describe how that works for arbitrary n dimensions. The code provides examples for 2, 3 and 4 dimensions, but does not describe the mapping, nor the details of the reverse mapping, in enough detail for me to derive the general case.

For example, in three dimensions, the mapping is: s = (x+y+z)/3, with the skewed coordinates being (x+s, y+s, z+s).

At this point, the integer parts of the skewed vector are extracted as (X, Y, Z) and a reverse mapping is made, where the reverse mapping is t = (X+Y+Z)/6, giving the unskewed corner (X-t, Y-t, Z-t).

Identifying this vector as (X0, Y0, Z0), the coordinates within the hypercube are then (x-X0, y-Y0, z-Z0).

Working through this with some arbitrary random numbers <10.8839, 0.305498, 16.3427> the mapped squashed hypercube origin seems to be <20, 9, 25> and the reverse-mapped coordinates from within this hypercube would seem to be roughly <-0.1161, 0.305498, 0.3427>
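The skew/unskew steps described above can be written out directly; here is a small Python sketch (variable names are mine, not Gustavson's) that reproduces the worked example:

```python
# Sketch of the 3D skew/unskew mapping discussed above: skew the input
# point onto the squashed-hypercube grid, take the integer cell corner,
# unskew it back, and measure the point's offset from that corner.
import math

def skew_cell_and_offsets(x, y, z):
    # Skew: s = (x+y+z)/3, then floor the skewed coordinates.
    s = (x + y + z) / 3.0
    X = math.floor(x + s)
    Y = math.floor(y + s)
    Z = math.floor(z + s)
    # Unskew: t = (X+Y+Z)/6 maps the integer corner back to input space.
    t = (X + Y + Z) / 6.0
    X0, Y0, Z0 = X - t, Y - t, Z - t
    # Offsets of the input point from the unskewed cell origin.
    return (X, Y, Z), (x - X0, y - Y0, z - Z0)

cell, offs = skew_cell_and_offsets(10.8839, 0.305498, 16.3427)
# cell is (20, 9, 25); offs is roughly (-0.1161, 0.305498, 0.3427)
```

This matches the numbers above, including the offsets falling outside [0, 1] — which is consistent with Stefan's reply below that the example in the .pdf assumes inputs already in [0, 1].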

But from the description of the algorithm, I expected all coordinates within the hypercube to have values between 0 and 1.

Is my understanding of these concepts flawed? Is the implementation of the idea flawed? Is the idea itself flawed?

Perhaps someone with more familiarity with this subject could describe this geometry of squashed hypercubes? —Preceding unsigned comment added by 159.54.131.7 (talk) 21:37, 14 September 2007 (UTC)


 * To be honest, I always found this part a little confusing as well. I never understood how the mapping was chosen, if there are other possible mappings, etc.  You might give Gustavson an email asking about it.  I contacted him about a year ago when I was working on tiling the noise, and he was quite helpful.  I do remember that the nice round numbers for the 3D case don't hold true for other dimensions.  If you get an answer, please be sure to write it here ;)  --Numsgil 03:44, 15 September 2007 (UTC)


 * According to Stefan, I should have started with a vector of numbers in the domain [0, 1] before hitting the example code in the .pdf —Preceding unsigned comment added by 159.54.131.7 (talk) 17:59, 17 September 2007 (UTC)

Order of complexity
Seems we have a disagreement between the sources on the order of complexity. My guess is that, the way it's presently known, it's O(n), since conceptually you're doing (# dimensions × # vertices) evaluations. Maybe Perlin's comment there is from a working draft and he fixed it later in the final presentation? --Numsgil (talk) 03:13, 21 June 2008 (UTC)
 * Perhaps, but it's usually the man who invented the algorithm that knows the most about it. Perhaps I should email him and ask? Darkuranium (talk) 16:50, 5 September 2008 (UTC)
 * Sure. How would we cite it, though?  --Numsgil (talk) 06:45, 6 September 2008 (UTC)
 * In N dimensions, a simplex has N+1 corners, and for each simplex corner you need to evaluate a polynomial in N dimensions. Therefore, the total computational complexity scales roughly as O(N^2), although the rate of increase is closer to O(N) in lower dimensions. Ordo-stuff assumes N is large, but here we're dealing with single digit numbers. As noted above, 2D simplex noise involves a little more work than 2D classic noise. (Stefan Gustavson, not logged in) 130.236.243.226 (talk) 13:00, 4 February 2011 (UTC)
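Stefan's corner-counting argument above can be made concrete with a back-of-the-envelope sketch. The costs here are schematic operation counts (my own simplification, not measured timings): simplex noise touches N+1 corners with roughly O(N) work per corner, while classic gradient lattice noise touches all 2^N hypercube corners:

```python
# Schematic per-sample work for N-dimensional noise, assuming ~N ops
# per corner evaluation. Illustrates O(N^2) vs O(N * 2^N) scaling;
# these are not real benchmark numbers.
def simplex_cost(n):
    return (n + 1) * n        # N+1 simplex corners, ~N ops each

def classic_cost(n):
    return (2 ** n) * n       # 2^N hypercube corners, ~N ops each

for n in (2, 3, 4, 5, 6):
    print(n, simplex_cost(n), classic_cost(n))
```

At N = 2 the counts are comparable (6 vs 8), consistent with 2D simplex noise being a little more work in practice; by N = 6 the schematic gap is 42 vs 384.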

Suggestion: Example Image
I think that an example image of what simplex noise looks like would be rather helpful. --83.100.151.189 (talk) 06:52, 7 September 2015 (UTC)

Patent
Has the patent expired? Awesomecat713 (talk) 02:53, 15 January 2022 (UTC)