User talk:Fly by Night/Archive Jun 10

Vector Calculus
Hello,

First, thank you for taking the time to give a detailed response. Unfortunately your post used quite a few terms that I'm unfamiliar with, so I just wanted to clarify what I understood from what you wrote. Basically, as I see it, there are two functions, so to speak, associated with f: one that maps the origin of the vectors (so f(0,0,0) = (1,1,0)), and another one which adjusts the tangent bundle at each point (the differential?). Is this at all right? Probably not... I'm sorry for being a little slow; it's just that none of this (tangent spaces, fibre bundles, etc.) has been mentioned in my textbook thus far, and all the functions that I've dealt with take a vector as input and produce a vector as output, without concern for the origin (aka vector fields). Should I be more familiar with linear algebra before studying vector calculus? I sort of just picked up some textbooks to read over the summer, and I guess I didn't pay too much attention to what the prerequisite knowledge was.

At any rate, there's a partial solution at the end of the book. It's not very descriptive, but it seems to involve less alien terminology...perhaps you can make some sense of it. It goes as follows:

"Let g1 and g2 be C1 functions from R3 to R such that g1(x) = 1 for |x| < 2√2/3; g1(x) = 0 for |x| > √2/3; g2(x) = 1 for |x − (1,1,0)| < √2/3; and g2(x) = 0 for |x − (1,1,0)| > 2√2/3. Let
$$h_1(\mathbf{x}) = \begin{bmatrix} 1 & 0 & 0 \\ 0 & -1 & 0 \\ 0 & 0 & 0 \end{bmatrix}\begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix}+\begin{bmatrix} 1 \\ 1 \\ 0 \end{bmatrix}$$
and
$$h_2(\mathbf{x}) = \begin{bmatrix} 0 & 0 & -1 \\ 0 & 0 & 0 \\ 0 & 0 & 1 \end{bmatrix}\begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix}$$
and put f(x) = g1(x)h1(x) + g2(x)h2(x)."
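The construction above can be sanity-checked numerically. This is only a sketch, not the book's solution: it substitutes piecewise-linear cutoffs (1 inside the smaller radius, 0 outside the larger one) for the C1 functions g1 and g2, which is enough to see where f takes its stated values.

```python
import numpy as np

R1, R2 = np.sqrt(2) / 3, 2 * np.sqrt(2) / 3  # the two cutoff radii

def bump(r):
    """Crude stand-in for the book's C^1 cutoffs: 1 inside the
    smaller radius, 0 outside the larger, linear in between."""
    return np.clip((R2 - r) / (R2 - R1), 0.0, 1.0)

def h1(x):
    A = np.array([[1, 0, 0], [0, -1, 0], [0, 0, 0]])
    return A @ x + np.array([1, 1, 0])

def h2(x):
    A = np.array([[0, 0, -1], [0, 0, 0], [0, 0, 1]])
    return A @ x

def f(x):
    x = np.asarray(x, dtype=float)
    g1 = bump(np.linalg.norm(x))
    g2 = bump(np.linalg.norm(x - np.array([1, 1, 0])))
    return g1 * h1(x) + g2 * h2(x)

print(f([0, 0, 0]))  # g1 = 1 and g2 = 0 here, so this is h1(0) = (1, 1, 0)
print(f([1, 1, 1]))  # both cutoffs vanish at (1,1,1), so f is zero there
```

The cutoffs make f agree with h1 near the origin and with h2 near (1,1,0), and interpolate smoothly (in the real, C1 version) in between.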

I don't really understand why this works, or how it even resolves my dilemma regarding the whole origin problem. It seems to me like a vector goes in and a vector comes out. Anyway, maybe you can clarify what the solution means. Thanks a lot! 173.179.59.66 (talk) 18:20, 26 June 2010 (UTC)


 * Let's think about a map ƒ between ℝ3 and ℝ3, i.e. ƒ : ℝ3 → ℝ3. So for every p ∈ ℝ3 we have ƒ(p) ∈ ℝ3. At each point p ∈ ℝ3, the tangent space to ℝ3 at p, denoted by Tpℝ3, is the vector space of all vectors based at p. Now, let's say that ƒ(p) = q. We can consider the tangent space at q, namely the vector space of all vectors based at q. So we have all the vectors based at p, denoted by Tpℝ3, and all the vectors based at q, denoted by Tqℝ3. What dƒp does is take a vector based at p and give a vector based at q: it is a linear map between vector spaces, dƒp : Tpℝ3 → Tqℝ3. The tangent bundle of ℝ3 is an affine space.
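The idea that dƒp is a linear map from vectors based at p to vectors based at ƒ(p) can be illustrated numerically. This is a rough sketch with a toy map chosen purely for illustration (it is not the map from the problem); dƒp(v) is approximated by a finite difference.

```python
import numpy as np

# A toy map f : R^3 -> R^3, chosen only for illustration
def f(p):
    x, y, z = p
    return np.array([x * y, y + z, x * x])

def df(p, v, t=1e-6):
    """Finite-difference approximation of the differential df_p
    applied to a vector v based at the point p."""
    p = np.asarray(p, dtype=float)
    return (f(p + t * v) - f(p)) / t

p = np.array([1.0, 2.0, 0.0])   # base point
u = np.array([1.0, 0.0, 0.0])   # two vectors based at p
v = np.array([0.0, 1.0, 1.0])

# df_p is (approximately) linear in the vector argument:
print(df(p, u + v))
print(df(p, u) + df(p, v))
```

Both printed vectors agree up to the finite-difference error, which is the linearity of dƒp in action: it sends vectors based at p to vectors based at ƒ(p), respecting addition.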


 * I'm not sure I follow your solution either. Let x = i + j + k as you stated in your original problem. In other notation: x = (1,1,1)^T. Then g1(x) = 0 since |x| = √3 > √2/3. Furthermore g2(x) = 0 since |x − (1,1,0)| = 1 > 2√2/3. So ƒ(x) is just zero.


 * If I were you then I would definitely brush up on your linear algebra. It's the basis for many things, and vector calculus is a natural next step after learning linear algebra. Also, try to read the affine space article. If you've made a mistake in copying the solution, or if you work it out, then come back to me. I would be pleased to hear how you get on... •• Fly by Night (talk) 19:53, 27 June 2010 (UTC)


 * First, yes you were right, I did make a mistake. It should read "g1(x) = 1 for |x| < √2/3; g1(x) = 0 for |x| > 2√2/3". I'm not sure if that changes any of your explanation.
 * Second, after reading your description I think I've come to understand the problem... at the very least, I've understood that I should study linear algebra! I still have one qualm though. My textbook claims that no prior knowledge of linear algebra is necessary (vectors and basic matrix operations are defined in the introduction). And I've scoured the index for any mention of tangent bundles, affine spaces, etc., to no avail. And thus far, without exception, all the maps I've encountered have been of the form f : R3 → R2, (x,y,z) ↦ (e^{xz}, xy), or something like that. This example seems so far removed from anything that was done in the textbook so far. This leaves me with two questions: 1) Is there a way to look at this problem with more elementary knowledge, in a way that should be expected of me? 2) Will I have to relearn vector calculus after studying linear algebra? Thanks a lot, you've been a lot of help. 173.179.59.66 (talk) 02:44, 28 June 2010 (UTC)
 * There won't be any mention of tangent bundles or affine spaces in your book; it's an elementary text after all. These are generalisations. Like I said: the set of all vectors based at a point in ℝ3 is the tangent space to ℝ3 at that point. All of the tangent spaces collected together form the tangent bundle. Talking in terms of tangent spaces and tangent bundles isn't necessary to solve your problem; in fact it seems to be more of a hindrance. These tangent bundles are a generalisation that applies to many more circumstances than just ℝn; they apply to manifolds. I was hoping to get you to understand what you were doing when you were solving the problem, and not just how to solve the problem.


 * You don't need to learn linear algebra if you just want to follow the book, do the sums and get the answers. If you want an idea of the bigger picture then a knowledge of linear algebra would be an asset. Linear algebra is all about vector spaces and matrices, so it obviously comes in handy when doing vector calculus! I think the problem is that I've tried to explain things in too broad a context, and for that I apologise; I think I've confused things more than helped them. So I'm sorry.


 * Just as a parting example, consider your map ƒ(x,y,z) = (e^{xz}, xy). This takes the point (0,1,0) to (1,0). The Jacobian matrix evaluated at (x,y,z), denoted by J(x,y,z), is the matrix of the differential:
 * $$J(x,y,z) = \left(\begin{array}{ccc} ze^{xz} & 0 & xe^{xz} \\ y & x & 0 \end{array}\right) \ \ \mbox{and so} \ \ J(0,1,0) = \left(\begin{array}{ccc} 0 & 0 & 0 \\ 1 & 0 & 0 \end{array}\right) . $$
 * Let's pick a vector based at (0,1,0); say (1,2,3). The image of the vector (1,2,3) is then
 * $$\left(\begin{array}{ccc} 0 & 0 & 0 \\ 1 & 0 & 0 \end{array}\right)\left(\begin{array}{c} 1 \\ 2 \\ 3 \end{array}\right) = \left(\begin{array}{c} 0 \\ 1 \end{array}\right). $$
 * So the differential dƒ takes the vector (1,2,3) based at (0,1,0) in three-space to the vector (0,1) based at (1,0) in two-space. •• Fly by Night (talk) 19:13, 28 June 2010 (UTC)
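The worked example above can be checked with a short script. This is just a sanity check of the hand computation: it evaluates ƒ and the hand-derived Jacobian at (0,1,0) and applies the Jacobian to the vector (1,2,3).

```python
import numpy as np

def f(x, y, z):
    # The map f(x,y,z) = (e^{xz}, xy) from the example
    return np.array([np.exp(x * z), x * y])

def J(x, y, z):
    # Jacobian of f, as computed by hand above
    return np.array([[z * np.exp(x * z), 0.0, x * np.exp(x * z)],
                     [y,                 x,   0.0             ]])

print(f(0, 1, 0))              # the point (0,1,0) maps to (1, 0)
print(J(0, 1, 0) @ [1, 2, 3])  # the vector (1,2,3) maps to (0, 1)
```

So the point goes through ƒ itself, while the vector based at that point goes through the Jacobian, exactly as described.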
 * OOOHHH okay now I see what you mean.
 * Alright, I understand the math, but I am ever so confused about the notation. You see, my book said that if f: Rn --> Rm, f is vector-valued and takes x to produce an m-tuple (f1(x), ..., fm(x)). So I thought that (f1(x), ... , fm(x)) itself was a vector (it's called vector-valued after all), but from what I've understood you're saying it's a point, and the vector at a point is the tangent to that point...so why do they call it vector-valued??? And why do they call it the vector emanating from wherever instead of the tangent vector emanating from wherever???(Sorry for the pestering questions, you must be pretty annoyed at me...but I feel like I'm at the precipice of understanding everything). 00:17, 29 June 2010 (UTC) —Preceding unsigned comment added by 173.179.59.66 (talk)
 * Don't be silly; you're not annoying me at all. Vectors and points are more or less interchangeable. Your book is treating ℝn like a vector space, and each of its points as Euclidean vectors in that vector space. I think the idea is to get you away from thinking that a function takes a number and gives a number. A vector-valued function (with the real line as its domain) takes a number and gives a vector, i.e. a point in n-dimensional space. To get to a point in space you travel in a given direction a fixed distance. But that's just what a vector is: something with direction and size. So for any point p in ℝn we get a vector: the vector based at the origin and ending at p. Likewise, for any vector based at the origin we get a point in ℝn: the point at the end of the vector. It's not so clear what we do with vectors not based at the origin. But the Affine Space article tries to examine this, albeit in an axiomatic way. Take a look at the article Vector Valued Function for some more details. •• Fly by Night (talk) 19:12, 30 June 2010 (UTC)

Great, thanks a lot, I understand now. You were a lot of help! 70.52.45.181 (talk) 03:30, 1 July 2010 (UTC)