Wikipedia:Reference desk/Archives/Mathematics/2010 June 5

= June 5 =

General statistics question
Consider a function $$f$$. Suppose you perform a series of experiments with several values of $$x$$, and record the values for $$f(x)$$. Is there a way to derive from this information an approximation for the function $$f$$?--Wikinv (talk) 13:05, 5 June 2010 (UTC)
 * Yes, lots of ways. Which you would use depends on what kind of function you are expecting (linear? quadratic? a higher degree polynomial? exponential? logarithmic?). What kind of function you should expect will depend on what the function represents. You can't work out what to expect just by looking at the values you have - for example, if you have n points you will definitely be able to find a polynomial of degree n−1 that is a perfect fit, but it would be pure luck if that was right, since it depends on how many points you measured, which is arbitrary. It also depends on whether the values for f(x) that you record are exact or have some error to them. So, in short, you need to give us more information about what you are trying to do. --Tango (talk) 13:19, 5 June 2010 (UTC)


 * Curve fitting covers this briefly. It helps if you have some expectation or a priori knowledge of what the shape is going to be. If not, plotting the curve and looking at it is a good place to start - then you can see if it is straight, curving upwards or downwards, oscillating, hump-shaped and tending to a particular value at the extremities, etc. - the shape of the curve helps you choose where to start with the curve fitting equations.87.102.43.94 (talk) 15:45, 5 June 2010 (UTC)
 * If there are no errors in the measurements of f, then cubic splines are a popular method for interpolation. If you're trying to find the function f exactly, assuming it has some simple form, it can be tricky - but it can often be done by applying various transformations to the values until you come up with something recognizable.
 * If there are errors, and to compensate you have relatively many observations, one typically uses some form of least squares fitting. Nonparametric regression is an approach that assumes as little as possible - Kernel regression is its staple method. -- Meni Rosenfeld (talk) 18:07, 5 June 2010 (UTC)
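The least-squares approach mentioned above can be sketched in a few lines. This is an illustrative example (not from the original thread): fitting a straight line y = a·x + b to samples (x_i, y_i) via the closed-form normal equations.

```python
# Minimal sketch of linear least-squares fitting: estimate slope a and
# intercept b of the best-fit line y = a*x + b through the sampled points.
def fit_line(xs, ys):
    n = len(xs)
    sx, sy = sum(xs), sum(ys)
    sxx = sum(x * x for x in xs)
    sxy = sum(x * y for x, y in zip(xs, ys))
    # Normal equations for the line minimizing the sum of squared residuals
    a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    b = (sy - a * sx) / n
    return a, b

# Noise-free samples of f(x) = 2x + 1 recover the coefficients exactly:
a, b = fit_line([0, 1, 2, 3], [1, 3, 5, 7])
print(a, b)  # 2.0 1.0
```

With noisy measurements the same formulas return the line minimizing the squared errors rather than an exact fit.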

Symmetry in a multi-dimensional space
Are the points (1,2,3,4), (1,3,2,4) symmetric in a four-dimensional space (or in any multi-dimensional space, in which the vectors are embedded)? i.e. can either vector be obtained from the other one by translation, reflection and/or rotation in a four- (or multi-) dimensional space? HOOTmag (talk) 18:15, 5 June 2010 (UTC)
 * The sequences you describe are 1-dimensional (you definitely didn't mean two points in four or more dimensions?) - in a normal, e.g. Euclidean, 1-d space there's no way to do it; the only way I can think of is to define a sort of 'fracture plane', i.e. a boundary in the number line (or even more complicated non-uniform descriptions).87.102.43.94 (talk) 18:42, 5 June 2010 (UTC)
 * Yes, I really meant a point in a four dimensional space. HOOTmag (talk) 19:00, 5 June 2010 (UTC)
 * Yes, one point is a reflection of the other. -- Meni Rosenfeld (talk) 19:05, 5 June 2010 (UTC)
 * Do all permutations - of a given sequence (point) of n-dimensional space - constitute reflections of each other, in that space? HOOTmag (talk) 19:21, 5 June 2010 (UTC)
 * Well, yes, but any two points P and Q in n-dimensional space are symmetric under reflection in the n-1 dimensional space that is perpendicular to the line between P and Q and contains the mid-point of PQ. Gandalf61 (talk) 19:30, 5 June 2010 (UTC)
 * Thanx much. HOOTmag (talk) 10:12, 6 June 2010 (UTC)
 * Niggle: if it's a (Euclidean) vector and not a point, then the transformation isn't possible (using the 3 transformations above) if the magnitudes are not the same(?). I suppose point was meant. Think I'm getting lines and vectors mixed up. Ignore me.87.102.43.94 (talk) 20:18, 5 June 2010 (UTC)
 * Anyways, thank you for your response. HOOTmag (talk) 10:12, 6 June 2010 (UTC)
 * Well, you can translate one vector by the difference between the two and get the other vector... that's true in any dimension... --Tango (talk) 19:12, 5 June 2010 (UTC)
 * Of course. I had to omit the option of translation... HOOTmag (talk) 19:21, 5 June 2010 (UTC)


 * Any permutation of the vector components will have the same norm, so each is related to the others by an orthogonal transformation. Within permutations of a vector alone, however, one can analyze it in terms of a permutation group, which has some fun properties and is usually used as an introduction to group theory. The actual rotation group for all reals in 4D, though, is SO(4), which is a group that comes in handy in physics (well, in 3d it does). SamuelRiv (talk) 19:31, 5 June 2010 (UTC)
 * Thanks a lot. HOOTmag (talk) 10:12, 6 June 2010 (UTC)
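The norm-preservation claim above is easy to verify numerically; here is a quick illustrative check (not part of the original thread) that every permutation of (1, 2, 3, 4) has the same Euclidean norm.

```python
# Every permutation of a vector's components has the same Euclidean norm,
# since permuting coordinates is an orthogonal transformation.
import itertools
import math

v = (1, 2, 3, 4)
norms = {math.sqrt(sum(c * c for c in p)) for p in itertools.permutations(v)}
print(norms)  # a single value: sqrt(1 + 4 + 9 + 16) = sqrt(30)
```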

Any two points in 4-dimensional Euclidean space are reflections of each other if you suitably position the "mirror". Just make it a 3-dimensional affine subspace that is the perpendicular bisector of the line segment between the two points.
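The construction just described can be sketched concretely. This is an illustrative example (the function name is my own): reflecting a point across the hyperplane that perpendicularly bisects the segment between the two points, using the standard reflection formula x' = x − 2((x − m)·n̂)n̂, should carry one point onto the other.

```python
# Reflect point p across the hyperplane perpendicular to the segment pq
# and passing through its midpoint; the image of p should be q.
import math

def reflect_across_bisector(p, q):
    # The mirror's normal is the direction from p to q, normalized.
    n = [qi - pi for pi, qi in zip(p, q)]
    length = math.sqrt(sum(c * c for c in n))
    n = [c / length for c in n]
    # The midpoint of pq lies on the mirror.
    m = [(pi + qi) / 2 for pi, qi in zip(p, q)]
    # Reflection: x' = x - 2 * ((x - m) . n) * n
    d = sum((pi - mi) * ni for pi, mi, ni in zip(p, m, n))
    return [pi - 2 * d * ni for pi, ni in zip(p, n)]

print(reflect_across_bisector((1, 2, 3, 4), (1, 3, 2, 4)))
# [1.0, 3.0, 2.0, 4.0] - the original question's second point
```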

But if you're interested in permuting the four Cartesian coordinates, then here's what happens: there are 24 permutations. All of them have the same sum of four coordinates; therefore all of them lie in a common 3-dimensional affine subspace. Since they lie in a common 3-dimensional Euclidean space, one can draw a picture of the polyhedron whose vertices they are. It's a polyhedron with 24 vertices, 36 edges, and 14 faces. If you feel that Greek-derived names make something sound hifalutin and scientific, it's sometimes called a tetrakaidekahedron because it has 14 faces. And sometimes it's called a permutohedron because of the way it's derived from permutations as described above. Michael Hardy (talk) 00:27, 6 June 2010 (UTC)
 * Oh, that's wonderful! Thank you! HOOTmag (talk) 10:12, 6 June 2010 (UTC)
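The permutohedron facts stated above are easy to check by enumeration; the following illustrative snippet (not from the original thread) confirms that the 24 permutations of (1, 2, 3, 4) are distinct and all share the same coordinate sum, so they lie in a common 3-dimensional affine subspace of R^4.

```python
# The vertices of the permutohedron: all permutations of (1, 2, 3, 4).
import itertools

points = set(itertools.permutations((1, 2, 3, 4)))
print(len(points))               # 24 distinct vertices
print({sum(p) for p in points})  # every vertex has coordinate sum 10
```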