Wikipedia:Reference desk/Archives/Mathematics/2007 July 30

= July 30 =

History of Laplace Transform
I had a dream in which Mr Laplace, in a complete fit of rage, changed history.

Instead of writing


 * $$F(s) = \mathcal{L} \left\{f(t)\right\}=\int_{0^-}^\infty e^{-st} f(t) \,dt. $$

He wrote in pure rage


 * $$F(s) = \mathcal{L} \left\{f(t)\right\}=\int_{0^-}^\infty e^{st} f(t) \,dt. $$

Now would Laplace transform still work? Why (historically) did Laplace choose e^(-st) instead of e^(st) in defining his famous transform?

202.168.50.40 01:06, 30 July 2007 (UTC)


 * Funnily enough, that sounds like how some of my science teachers posed questions (that and "You're stuck on a deserted island, and you want to make soap, and all you have is a small chemical laboratory."). That said, what's the difference between the two? Specifically, if we call the first one F1 and the second F2, say, is there an easy transformation between them? (Hint: The answer is yes.) In which case, what kind of functions does F1 work on, and what kind of functions does F2 work on? For that, you may want to look at Laplace transform and Two-sided Laplace transform. If you apply the substitution you found to switch between your two transforms, how does that change the region in which the transform is defined, and which seems more "intuitive"? (I could give you a straight answer, but personally I think this is not too hard to do yourself, and you'll probably understand it better that way. Ask for clarification if you want it, though.) Confusing Manifestation 04:14, 30 July 2007 (UTC)


 * The Laplace transform F(s) typically exists for all complex numbers such that Re{s} > a, where a is a real constant which depends on the growth behavior of f(t), whereas the two-sided transform is defined in a range a < Re{s} < b. The subset of values of s for which the Laplace transform exists is called the region of convergence (ROC) or the domain of convergence. In the two-sided case, it is sometimes called the strip of convergence.
 * That's all good, but why do we care whether Re(s) > a, when with the alternative definition we would get Re(s) < -a instead? The reason being that the alternative Laplace transform still works, with the modification that s is now replaced with -s.
 * My gut feeling is that Laplace was in love with the Heaviside step function and chose the normal definition of the Laplace transform to get L{u(t)} = 1/s, because had he used the alternative definition, he would have got altL{u(t)} = -1/s instead.
 * 211.28.125.199 09:35, 30 July 2007 (UTC)


 * Probably to some extent that's true. The thing is, if you use the traditional definition, then the transforms are defined for a whole bunch of functions in the region s > 0, whereas your version has them defined for s < 0. Either choice is valid, but I think that as humans we "like" positive numbers more, and will tend to give them precedence, and I suspect that that's what Laplace did. Confusing Manifestation 23:23, 30 July 2007 (UTC)

Vectors
I have just been introduced to the idea of vectors and I don't fully understand the difference between a free vector and a position vector. Could someone explain? The One 14:48, 30 July 2007 (UTC)


 * Consider the real number line. Zero is a point distinguished from all others. Free vectors are differences, displacements. For example, there is a vector from 16 to 38 which is 22 units long in the positive direction; this is the same difference (displacement) as that from 17 to 39. Position vectors are displacements from a fixed point, which in this case we choose to be the zero point. Thus the vector of length 22 in the positive direction gives the point at 22.
 * The same idea carries over to the Cartesian plane, and to 3D space, and so on. --KSmrqT 15:46, 30 July 2007 (UTC)


 * Also, in Homogeneous coordinates, these things can be distinguished by the value in the extra component. A position generally has a 1 there, while displacements have 0; this has the interesting property that you can't add a position to a position (without destroying the value 1), but you can add a displacement to a position, or a displacement to a displacement. Also, only positions can be returned to non-homogeneous space by dividing by the extra component, as displacements cause a division by zero. - Rainwarrior 16:32, 30 July 2007 (UTC)
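The bookkeeping described above can be played with directly (a minimal sketch in plain Python; the helper names `add` and `to_cartesian` are made up for illustration):

```python
# Sketch: positions carry a trailing 1, displacements a trailing 0
# (homogeneous coordinates, here for points on a line embedded in 2D).

def add(u, v):
    # Componentwise addition of homogeneous tuples.
    return tuple(a + b for a, b in zip(u, v))

def to_cartesian(h):
    # Only valid for positions: divide by the extra component.
    x, y, w = h
    if w == 0:
        raise ZeroDivisionError("a displacement has no Cartesian point")
    return (x / w, y / w)

p = (16, 0, 1)   # position at x = 16
q = (38, 0, 1)   # position at x = 38
d = (22, 0, 0)   # displacement of 22 units

moved = add(p, d)           # position + displacement -> position (w stays 1)
print(moved)                # (38, 0, 1)
print(add(p, q))            # position + position -> w = 2, no longer a position
print(to_cartesian(moved))  # (38.0, 0.0)
```

Note how adding two positions "destroys" the trailing 1, exactly as described, while position + displacement lands back on a valid position.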

Exponential Trend
The caption on the first image at Technological singularity describes the image as showing an exponential trend. Is this correct? In my understanding, if one axis has a logarithmic scale, an exponential relationship would appear as a straight line. However, if both axes have logarithmic scales, only a linear relationship would appear as a straight line, though with a different distribution of points along that line. To me, this image appears to show a linear trend. What am I missing here? j e f f j o n 18:59, 30 July 2007 (UTC)
 * You're right to be suspicious. I think the relationship is better described as a power law (that is, y = bx^a, with a and b constant), rather than exponential (y = ba^x). Power-law relationships include linear, but could be more. Probably best to bring it up on the talk page of the article (I'm hesitant to make the change since I don't know if the graphic is incorrect and the caption right, or vice versa). Well spotted! --TeaDrinker 19:49, 30 July 2007 (UTC)
 * Looking at the enlarged image reveals that its meaning is as clear as mud. The claim that this is exponential is not even wrong, as it is incomprehensible (to me at least) what is being plotted as a function of what. Whatever it may mean, a straight line in a logarithmic graph indeed represents a power law, where the power is determined by the slope; the slope in this case reveals that the relation is, indeed, linear. -- Meni Rosenfeld (talk) 20:03, 30 July 2007 (UTC)
 * What the graph probably means is that the rate of change of something is linearly proportional to that thing. If that is so, then the thing indeed grows exponentially, though this is not directly reflected in the graph. -- Meni Rosenfeld (talk) 20:12, 30 July 2007 (UTC)
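The log-log vs semi-log distinction in this thread can be checked numerically (a sketch; the particular constants are arbitrary):

```python
import math

# On a log-log plot, a power law y = b*x**a becomes a straight line
# with slope a, because log y = log b + a*log x.
a, b = 2.0, 5.0
xs = [1.0, 10.0, 100.0, 1000.0]
log_pts = [(math.log10(x), math.log10(b * x**a)) for x in xs]

# Successive slopes between the plotted points are all equal to a...
slopes = [(y2 - y1) / (x2 - x1)
          for (x1, y1), (x2, y2) in zip(log_pts, log_pts[1:])]
print(slopes)  # [2.0, 2.0, 2.0]

# ...but an exponential y = b*a**x is NOT straight on log-log axes
# (it is straight on a semi-log plot: linear x, logarithmic y).
exp_pts = [(math.log10(x), math.log10(b * a**x)) for x in xs]
exp_slopes = [(y2 - y1) / (x2 - x1)
              for (x1, y1), (x2, y2) in zip(exp_pts, exp_pts[1:])]
print(exp_slopes)  # increasing, not constant
```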


 * I think it's complete hogwash; indeed, the Technological singularity hypothesis is complete hogwash, so it's an appropriate image. Trying to extrapolate from events which happened millions of years back to a date in the next few decades is very suspect. Here's a fun game to try: assume there have been ten major paradigm-shift events in the history of the world. Say these are at -10^1, -10^2, ..., -10^10, with time measured from now. Let x = -10^i and y = 10^i - 10^(i-1) (the time between successive events). Plot -log(-x) vs log(y) and you get a graph very similar to the image. Now assume that we are at some other time in history, say s years from now, and plot log(s-x) vs log(y). You will get a very similar graph, but with a hockey-stick bend at the end (that is, at close dates). If you discount the events of the last thousand years or so, the other points give a graph which appears to lie on a straight line pointing to our new time. That is, if people at some other point in history repeat the experiment with the same data, they will see that the technological singularity is about to happen. Of course, it is reasonable to discount recent time points, as they are so linked to our outlook on what is important. --Salix alba (talk) 20:43, 30 July 2007 (UTC)
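The game above is easy to run through numerically (a sketch, using the toy event times and gaps exactly as defined in the comment):

```python
import math

# Events at times -10**1, ..., -10**10 (measured from now).
# The gap before event i is 10**i - 10**(i-1) = 9 * 10**(i-1).
times = [-(10**i) for i in range(1, 11)]
gaps = [10**i - 10**(i - 1) for i in range(1, 11)]

# Plot coordinates: log of "how long ago" vs log of the gap length.
pts = [(math.log10(-t), math.log10(g)) for t, g in zip(times, gaps)]

# The points fall on a straight line of slope exactly 1.
slopes = [(y2 - y1) / (x2 - x1)
          for (x1, y1), (x2, y2) in zip(pts, pts[1:])]
print(slopes)

# Shift the observer to s years in the future: old events still look
# linear, but recent ones bend into a "hockey stick" near x = log10(s).
s = 10**6
shifted = [(math.log10(s - t), math.log10(g)) for t, g in zip(times, gaps)]
```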

How can a set not be a member of itself?
I'll say up front that I'm not a mathematician, and I have no knowledge of set theory. I was browsing an article on Russell's paradox. It's not the paradox itself that's my question, but one aspect of set theory. The article talks about sets that aren't members of themselves. But how can that notion be possible? Doesn't anything have to be a member of itself? This is the example given in the second paragraph:

''Some sets, such as the set of all teacups, are not members of themselves. Other sets, such as the set of all non-teacups, are members of themselves.''

I don't see how that example works. I'm equating "being a member of itself" with "being equal to itself". I'm guessing I'm mistaken there? 128.163.224.198 20:36, 30 July 2007 (UTC)


 * If a set is an element of itself, it contains itself as a member:
 * Sets can contain other sets; for instance: A = {1, 2}, B = {3, 4}, C = {A, B} = {{1, 2}, {3, 4}}.
 * For a set to be an element of itself, it would be something like: G = {G, ...}
 * The weirdness of a set containing itself is what leads to the paradox.
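The membership idea can be played with directly in Python (a sketch; Python requires the inner sets to be frozenset, since mutable sets aren't hashable):

```python
# Sets can contain other sets; membership means "being one of the
# listed objects", not "being equal to the whole set".
A = frozenset({1, 2})
B = frozenset({3, 4})
C = {A, B}          # C = {{1, 2}, {3, 4}}

print(A in C)       # True: A is one of C's two members
print(1 in C)       # False: 1 is inside A, not a member of C itself

S = frozenset({1, 2, 3})
print(1 in S)       # True
print(S in S)       # False: S is not 1, 2, or 3, so S is not a member of S
```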


 * The first obvious thing to do is take a look at set. Now, "being a member of" is not the same as "being equal to". A set is intuitively a collection of objects. An object is a member of the set if it is one of the objects in the collection. For example, the set {1, 2, 3} has 3 members - 1, 2 and 3. So 1 is a member of the set ($$1 \in \{1, 2, 3\}$$), and so are 2 and 3. If x is anything which is neither 1, 2 nor 3, then $$x \notin \{1,2,3\}$$. In particular, {1, 2, 3} is not 1, it is not 2 and it is not 3, so $$\{1, 2, 3\} \notin \{1,2,3\}$$. -- Meni Rosenfeld (talk) 20:58, 30 July 2007 (UTC)
 * As for the example - "The set of all teacups" means the set whose members are all the teacups and nothing else. So anything which is a teacup is a member, and anything which is not a teacup is not. The set itself is not a teacup, so it is not a member. -- Meni Rosenfeld (talk) 21:01, 30 July 2007 (UTC)


 * Perhaps you are confusing member with subset. A set 'a' is a subset of another set 'b' if b contains all of the elements of a.  The subset property is always reflexive -- i.e. For any set 'a', a is a subset of a.  Hope this helps!   Isaac 21:06, 30 July 2007 (UTC)
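Python's set operators make the member/subset distinction concrete (a minimal sketch):

```python
a = frozenset({1, 2})
b = frozenset({1, 2, 3})

print(a <= b)   # True: every element of a is in b, so a is a subset of b
print(a in b)   # False: a itself is not one of b's elements (1, 2, 3)
print(a <= a)   # True: subset-hood is reflexive; every set is a subset of itself
print(a in a)   # False: that does not make a a *member* of itself
```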


 * If you take the membership list of the Slug Club, you'll see that the Slug Club itself is not listed as a Slug Club member. That is because the Slug Club is not a member of itself. In fact, most clubs and other organizations that have members are not on their own membership lists. Same with sets. --Lambiam 23:11, 30 July 2007 (UTC)


 * I'm sort of following all these. But what I also don't get then is how "other sets, such as the set of all non-teacups, are members of themselves". 128.163.224.198 19:28, 31 July 2007 (UTC)
 * The set of all non-teacups (if it exists) has as members everything which is not a teacup. The set itself is not a teacup, so it is a member.
 * Of course, in standard set theory as described by the Zermelo–Fraenkel axioms, it can be shown that, in fact, there are no sets which are members of themselves. For example, the collection of all non-teacups is not a set at all. -- Meni Rosenfeld (talk) 19:35, 31 July 2007 (UTC)
 * I see. So since the set is an abstract mathematical idea, not an actual, physical teacup, it's a member of itself. Makes sense. 128.163.224.198 19:49, 31 July 2007 (UTC)
 * Yes. But this is all just an intuitive discussion. In the formal development of the theory, there are no teacups or any other physical objects. There is, for example, the set of all natural numbers, which is not a member of itself since it, itself, is not a natural number (though they are all abstract mathematical ideas!). -- Meni Rosenfeld (talk) 20:13, 31 July 2007 (UTC)
 * While most axiomatizations of set theory include no teacups, or indeed anything other than sets, it is, in fact, quite possible to include teacups as urelements if one wants to. —Ilmari Karonen (talk) 22:17, 31 July 2007 (UTC)

Point in relation to a line
Hello,

I'm looking for some R2 geometry help in a program I'm writing. I have a line defined by Pt1 and Pt2, with a direction from pt1 to pt2. Given a third point Pt3, I'd like a function to return

1 if the point is counterclockwise from the line, 0 if the point is on the line, and -1 if the point is clockwise from the line.

Determining whether the point is above or below the line is easier (using ax+by+c=0 form for the line and plugging in pt3). However, I'm not sure how to adjust for line orientation.

Thanks! Isaac 21:03, 30 July 2007 (UTC)


 * What you essentially need is the cross product of the vectors from Pt1 to Pt2 and Pt3. Practically, what you need to calculate is the signum of $$(x_2-x_1)(y_3-y_1)-(x_3-x_1)(y_2-y_1)$$. -- Meni Rosenfeld (talk) 21:14, 30 July 2007 (UTC)
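A direct implementation of that signum might look like this (a sketch; exact for integer coordinates, though floating-point inputs have the robustness hazards discussed in the next reply):

```python
def orientation(p1, p2, p3):
    """Return 1 if p3 is counterclockwise of the directed line p1 -> p2,
    -1 if clockwise, and 0 if it lies on the line.  Points are (x, y)."""
    cross = ((p2[0] - p1[0]) * (p3[1] - p1[1])
             - (p3[0] - p1[0]) * (p2[1] - p1[1]))
    return (cross > 0) - (cross < 0)  # signum of the 2D cross product

print(orientation((0, 0), (1, 0), (0, 1)))   # 1: above the x-axis is CCW
print(orientation((0, 0), (1, 0), (0, -1)))  # -1: below is CW
print(orientation((0, 0), (1, 0), (2, 0)))   # 0: collinear
```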


 * As a practical matter, I highly recommend consulting the work of Jonathan Shewchuk on how to compute this robustly. Purely as theory, we may write the three points in homogeneous coordinates and look at the sign of the determinant of a matrix with those rows:
 * $$ \sgn \begin{vmatrix}x_1 & y_1 & 1\\x_2 & y_2 & 1\\x_3 & y_3 & 1\end{vmatrix} = \sgn \left( x_1 y_2 + x_2 y_3 + x_3 y_1 - x_3 y_2 - x_2 y_1 - x_1 y_3 \right) . $$
 * However, a floating point computation may encounter numerical hazards, which Shewchuk explains how to avoid, efficiently. --KSmrqT 22:44, 30 July 2007 (UTC)
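As a quick numeric sanity check (hypothetical helper names), the expanded homogeneous determinant agrees term-by-term with the cross-product formula from the earlier reply:

```python
# The 3x3 homogeneous-coordinate determinant expands to the same
# quantity as the 2D cross product, so the two orientation tests agree.
def det3(p1, p2, p3):
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    return x1*y2 + x2*y3 + x3*y1 - x3*y2 - x2*y1 - x1*y3

def cross(p1, p2, p3):
    return (p2[0] - p1[0]) * (p3[1] - p1[1]) - (p3[0] - p1[0]) * (p2[1] - p1[1])

triples = [((0, 0), (3, 1), (1, 4)), ((2, 5), (-1, 3), (7, -2))]
for tri in triples:
    print(det3(*tri), cross(*tri))  # equal for every triple of points
```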