User talk:Headbomb/Archives/2012/August

Gerald Guralnik
Hello! First of all, sorry for my English; my mother language is Spanish. I want to ask you about the last paragraph of Gerald Guralnik (I saw you have edited the article). Could it be a copyright violation from this web? Thanks and best regards. Shalbat (talk) 09:36, 4 August 2012 (UTC)


 * Looks like it yes. Feel free to fix it. Headbomb {talk / contribs / physics / books} 16:13, 4 August 2012 (UTC)

Isidor Isaac Rabi
I'm looking for someone to step up and carry out a GA review of Isidor Isaac Rabi, which has been hanging around unreviewed since 19 June. Hawkeye7 (talk) 02:56, 10 August 2012 (UTC)

TfD for new Cite_web/smart
I am contacting you, per wp:CANVAS, after contacting other negative or positive editors, as a user previously opposed to quick citation templates, to consider the latest TfD discussion. In this case, the template {Cite_web/smart} is finally the big upgrade intended to entirely replace {Cite_web} with a faster version that carefully checks the parameters, invoking {Citation/core} only for rare parameters and otherwise quickly formatting the cite. See the TfD of 11 August 2012:
 * WP:Templates_for_discussion/Log/2012_August_11

This notice is only an FYI, announcing the discussion under way. Feel free to oppose the template, support the template, ignore the discussion, or even delete this message. The TfD just started, so there should be at least 7 days to consider the issues. Thanks. -Wikid77 (talk) 21:25, 11 August 2012 (UTC)

You might be interested
A sock from Sockpuppet investigations/BookWorm44/Archive, maybe Rhawn joseph himself, has created encyclopediadramatica.se/Douglas_Weller (can't use full url because of blacklist). This was after a similar article was created at Metapedia by someone who almost certainly is also User:Onion hotdog. Dougweller (talk) 10:50, 21 August 2012 (UTC)

Gravito-electromagnetism
Hi, I wrote in 2007 and published in 2008 an article titled, in English, "Matter and the speed of light"; the article is in Spanish.

I think this work is related to gravito-electromagnetism. I'd like you to read this information and perhaps consider it for the gravito-electromagnetism page. — Preceding unsigned comment added by 201.157.31.151 (talk) 16:14, 23 August 2012 (UTC)


 * Sorry, I don't speak Spanish, I don't care to read articles published in pseudoscience journals, and I have little interest in gravitomagnetism in general. And no, this is not fit for Wikipedia (per WP:RS, amongst other things). Headbomb {talk / contribs / physics / books} 02:48, 24 August 2012 (UTC)

More su(2) comments
OK, I've spent far too much time on this talk page already. One last remark, then. You learned, above, what the Lie algebra for su(2) was: recall, it was the commutation relation $$[L_i, L_j]=i\epsilon_{ijk}L_k$$ for i=1,2,3. You also learned that there are matrix representations of this algebra: namely matrices of dimension (2j+1)x(2j+1) that multiply and add exactly like this algebra. Aside from matrix representations, there are also differential operator representations. Consider, for example:
 * $$L_x = -i\left(z\frac{\partial}{\partial y} - y\frac{\partial}{\partial z}\right)$$

and likewise for L_y and L_z, but with the coordinates cyclically permuted. (I stuck an i in there to make them more physics-like, and you can put an h-bar in there too; they're just scaling constants and can be set to anything.) So, more homework:
 * 1) Verify that these obey the commutation relation above.
 * 2) They are a set of coupled first-order differential equations. Solve these equations. Hint: review my previous homework assignment.
 * 3) The set of all of the solutions to these equations forms a topological manifold. What is the common, well-known name of that manifold? Hint: you've known this manifold for many years, you just don't know why it's here.
 * 4) Points on the manifold form a topological group. What is the name of that group?
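Homework item 1 can be checked symbolically; here is a sketch in sympy (not part of the discussion). Note I've taken the standard-sign convention L_x = -i(y∂/∂z - z∂/∂y), the overall negative of the operators as written above; that choice only flips the sign of the bracket.

```python
import sympy as sp

x, y, z = sp.symbols('x y z')
f = sp.Function('f')(x, y, z)

# Angular-momentum operators in the standard-sign convention (overall
# sign flipped relative to the thread, so the bracket closes as +i eps).
def Lx(g): return -sp.I * (y * sp.diff(g, z) - z * sp.diff(g, y))
def Ly(g): return -sp.I * (z * sp.diff(g, x) - x * sp.diff(g, z))
def Lz(g): return -sp.I * (x * sp.diff(g, y) - y * sp.diff(g, x))

# [Lx, Ly] f - i Lz f should vanish identically on a generic function f.
comm = sp.simplify(sp.expand(Lx(Ly(f)) - Ly(Lx(f)) - sp.I * Lz(f)))
print(comm)
```

The same computation with the other two index pairs closes the full algebra.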

The L's here are a special case of something more general, known as a Lie derivative. There are Lie derivatives defined for all smooth manifolds; not all smooth manifolds are groups.

Anyway, be sure to read as much of the articles behind these links as you can. Some of the articles are pretty good, some suck royally. I was floored by the total suckage of structure constant, it's a shame... a better article for this is adjoint endomorphism. So: think of three matrices $$M_k$$, whose matrix elements are given by $$[M_k]_{ij}=\epsilon_{ijk}$$ Hmmm. That's a homework exercise too. But anyway, read them and read them twice. They should start making sense. linas (talk) 14:27, 25 August 2012 (UTC)
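That last homework exercise is easy to confirm numerically; a sketch with numpy (illustrative, not from the discussion). The extra factor of -i is an assumed physicist's convention so the bracket closes with a +i, matching the commutation relation quoted above.

```python
import numpy as np

# Levi-Civita tensor eps[i, j, k]
eps = np.zeros((3, 3, 3))
for i, j, k in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
    eps[i, j, k], eps[i, k, j] = 1.0, -1.0

# Adjoint-representation matrices (M_k)_{ij} = -i eps_{kij}; the -i is a
# convention chosen here so that [M_i, M_j] = i eps_{ijk} M_k.
M = [-1j * eps[k] for k in range(3)]

for i in range(3):
    for j in range(3):
        comm = M[i] @ M[j] - M[j] @ M[i]
        expect = 1j * sum(eps[i, j, k] * M[k] for k in range(3))
        assert np.allclose(comm, expect)
```

So the structure constants themselves, packaged as matrices, furnish a 3-dimensional (adjoint) representation of the algebra.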


 * Believe me, the time you spent here is/was appreciated, although I'm unsure how long it will take to cover and understand everything here, given that I don't even know what Lie algebras or topological manifolds are. If anything, I guess that I at least learned that Lie algebras are mathematical entities, rather than a mathematical framework (like algebra is). Headbomb {talk / contribs / physics / books} 16:03, 25 August 2012 (UTC)


 * OK. I don't think any of this stuff is out of your reach, but you do have to keep banging away at it, little by little. It's central to particle physics, and no grad student in particle physics can escape it. BTW, re: algebra: it is one of two things: (1) it is a generic framework for manipulating formulas. (2) it is a vector space V, enhanced with a function f:VxV -> V called "multiplication", i.e. you can multiply two vectors to get a vector. The Lie bracket is an example of this; there are many others. I am not sure which WP page defines meaning (2), but it is there, somewhere (see algebraic structure). Also: meaning (1) is very interesting, since it turns out to lead to/require/encompass all of computer science, and specifically the fascinating topics of Turing machines, lambda calculus, category theory, term rewriting, universal algebra, model theory... but I digress. linas (talk) 16:32, 25 August 2012 (UTC)


 * Well I've been banging at it for a few years now, and it peaked at "Saying screw it, I'm going to do it the long way", and I derived all spin/flavour/color/baryon wave functions. So I can go from, say, $$\mathbf{3} \otimes \mathbf{3} \otimes \mathbf{3}$$ to $$\mathbf{10} \oplus \mathbf{8} \oplus \mathbf{8} \oplus \mathbf{1}$$, in the sense that I take the tensor product of a three-state system with itself twice, define my operators in that new space, calculate their eigenvalues/eigenvectors, and then pick a new basis, formed of the eigenvectors, so the relevant operators are diagonal; and if you plot them on a graph whose axes you choose to be the number of quarks of certain flavours, you get the 10+8+8+1 weight diagrams. But I still don't know what 10 is. Is it the set of symmetrical eigenvectors? Is it a 10x10 matrix, and if so, which? If it's a set of eigenvectors, or a 10x10 matrix, what does it have to do with the SU(3) group, a group whose elements are neither eigenvectors nor 10x10 matrices? That's more or less what I'm trying to figure out. Until that happens, I don't think I'll ever be able to understand group theory, because I don't even know what the terms are referring to. Headbomb {talk / contribs / physics / books} 17:13, 25 August 2012 (UTC)
 * Well, we're starting to go in circles here. You've got to re-read and rethink the above. So again: '10' here refers to a certain set of 10x10 matrices that obey the same commutation relations as the su(3) algebra (not group). These matrices are not reducible. But I already explained this above; you've just forgotten.


 * Also, in the discussion above, we have not yet encountered a group. We've not done or talked about group theory at all. It just hasn't happened, so don't keep saying or thinking "group". The only way to get from the algebra to the group is by taking the exp map.  You are capable of computing this for su(2).  You can attempt it for su(3) but you'll fail. To get farther, you would have to learn how to use some of the tools. linas (talk) 01:10, 26 August 2012 (UTC)
 * Well I'll take your word for it because you seem to understand this stuff very well, and I don't. Headbomb {talk / contribs / physics / books} 01:38, 26 August 2012 (UTC)

I mention you
Here. Eau (talk) 03:16, 27 August 2012 (UTC)

SO(3)
Apologies in advance if any of this comes across as patronizing; it isn't intended to, as I know full well that you know the field much better than I do. That said, under the assumption your recent WT:PHYS comments were at least in part a request for information:

What's meant by "the group of rotations in [3D Euclidean] space", if I understand correctly, is that elements of SO(3) add up the way changes to roll, pitch, and yaw do. For small changes, these are approximately independent of each other. For large changes, they aren't (among other things, meaning it matters whether you change pitch or yaw first; the operations are noncommutative). The point of Lie algebras is to quantify the rules for manipulating the elements of Lie groups in the same way you'd manipulate vectors or scalars (mapping operations such as addition or multiplication to their appropriate equivalents). Gauge theories and similar that are expressed in terms of Lie groups are ones where the equations don't change if all variables have a transformation applied that is within that Lie group (much as Newton's equations of motion remain in force if all positions are translated by some amount).

The textbook example is usually electromagnetism and U(1); to butcher what's meant by "electromagnetism" in that context, multiply all phasors in a circuit analysis diagram by $$e^{j\omega}$$ for any real $$\omega$$, and you've changed nothing about the system (just chosen a different coordinate system to express it in). The set of all possible values of $$e^{j\omega}$$ (phases in the example above), combined with the rules for adding/multiplying/etc these values, is the group U(1).

Disclaimer: I am neither a mathematician nor a physicist, I'm still struggling with a lot of the notation and concepts used in formal descriptions in physics, and I'm still trying to wrap my mind around how gauge theories are put together. This just happens to be one of the few things that I _did_ pick up, so I hope my explanation is useful to you. --Christopher Thomas (talk) 20:50, 21 August 2012 (UTC)


 * Oh don't worry about patronizing me on this. I'm a complete failure at group theory in general. In the above example, I assume j is the imaginary unit, and that you meant $$e^{j\omega}$$. What I don't "get" about things like U(1) or SO(3) or SU(6) is ... well I don't understand it to the point that I don't even know what I don't understand about it.


 * A group is a set + binary operation with certain constraints (closure, associativity, etc.). That's well and all, but when someone says EM exhibits U(1) properties, what is the set, what is the operation, and why do we even care? Or more directly, you have things like the baryon decuplet being an irreducible representation of SU(3). Well, SU(3) is the group of 3x3 unitary matrices with determinant 1. That's great and all. But what does the baryon decuplet have to do with 3x3 matrices? Much less special unitary matrices? How can you have a 10-dimensional representation of a group of 3x3 matrices? You'd think that, at most, you'd have a 9-dimensional representation of such a group, since there are at most 9 linearly independent 3x3 matrices (forgetting those removed by unitarity and specialness). If you have a 10-dimensional representation, what is that? A 10-dimensional column vector? A 10x10 matrix? Etc... It's this stuff that makes no sense to me, and so far I've had little success in finding either good introductory books on group theory, or books that make the link between group theory and physics. Because if you told me something followed U(3) or O(3) or SO(3) or SU(3), I wouldn't have the slightest idea of what it implied. Headbomb {talk / contribs / physics / books} 21:13, 21 August 2012 (UTC)


 * Yes, in engineering we tend to use j instead of i for $$\sqrt{-1}$$, as I and i are normally used for electric current variables.


 * For the U(1) example I gave above, the set is all possible values of $$e^{j\omega}$$ (for real $$\omega$$), and the operation we care about for purposes of this discussion is multiplying two such values ($$e^{j\omega_a} \cdot e^{j\omega_b} \rightarrow e^{j\omega_c}$$). Multiplying two values from the set always gives you a value that's also in the set, and it even ends up being commutative for this toy example. In this case, "multiplication" does exactly what you'd expect from arithmetic multiplication, though that's not mandatory.


 * For purposes of an example, we'll consider Ohm's Law as the law of the universe we're examining, and represent voltages and currents as phasors (sine waves with magnitude and phase defined by a complex number of the form $$a \cdot e^{j\omega}$$, where $$a$$ is the amplitude and $$\omega$$ is the phase, both being real).


 * Consider a generic case: $$V_a = I_a \cdot R$$, where V and I are phasors and R is a scalar. In the phasor representation, $$V_a = a_V \cdot e^{j \omega_a}$$ and $$I_a = a_I \cdot e^{j \omega_a}$$ (the fact that R is a real scalar forces the same phase angle and the same $$e^{j \omega_a}$$ term for both V and I).


 * Saying that Ohm's Law is invariant under U(1) requires showing two things: first, that the variables we're plugging into it _have_ a component from the U(1) symmetry group, and second, showing that the equation stays the same even if I transform these components under U(1). The first part is easy to see in my example: I've set up the representation of voltage and current so that they contain explicit terms of the form $$e^{j\omega}$$, so these representations of voltage and current both have U(1) symmetry. The second part can be shown by plugging in a general version of "transformation under U(1)". In this case, we'll multiply each phasor by $$e^{j \omega_b}$$, and see what happens.


 * Doing the multiplication gives $$V_b = a_V \cdot e^{j \omega_a} \cdot e^{j \omega_b} = a_V \cdot e^{j (\omega_a + \omega_b)}$$, and $$I_b = a_I \cdot e^{j \omega_a} \cdot e^{j \omega_b} = a_I \cdot e^{j (\omega_a + \omega_b)}$$. Subbing this back into Ohm's Law gives:
 * $$V_b = I_b \cdot R$$
 * $$a_V \cdot e^{j (\omega_a + \omega_b)} = a_I e^{j (\omega_a + \omega_b)} \cdot R$$
 * Given that our original version of the equation held (with $$V_a$$ and $$I_a$$), our new version also holds, confirming the conjecture that Ohm's Law is invariant under U(1). If it wasn't invariant, there would be choices of $$e^{j \omega_b}$$ for which it would not have held.
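The invariance check above condenses into a few lines of code; a toy sketch (all the names here are illustrative, nothing from a real circuit library):

```python
import cmath

# Toy check that Ohm's law V = I*R survives a global U(1) phase rotation.
R = 50.0                          # real, scalar resistance
I_a = 2.0 * cmath.exp(1j * 0.3)   # current phasor, amplitude 2, phase 0.3 rad
V_a = I_a * R                     # voltage phasor forced by Ohm's law

phase = cmath.exp(1j * 1.7)       # arbitrary element of U(1)
V_b, I_b = V_a * phase, I_a * phase

# The transformed phasors still satisfy the same equation.
assert abs(V_b - I_b * R) < 1e-12
```

Any other choice of phase angle works the same way, which is exactly the "for all elements of the group" part of the invariance claim.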


 * The physical interpretation of all of this is that a resistor doesn't care what phase of a sine wave you put into it; it'll still work. Choice of phase reference is arbitrary in electrical engineering, as long as it's chosen consistently.


 * Showing that something is invariant under a more complicated symmetry group usually involves more than just one test, but this should be enough to demonstrate the idea.


 * As for why this is done and what the use of it all is, I've been given the impression that it makes it much simpler to express certain concepts using symmetry groups as shorthand, and that it's very handy to be able to approximate not-quite-symmetrical systems as having such symmetries, but that's where I'm still trying to understand most of the details.


 * The handwavy version of how this fits in with the Standard Model is along the lines of "the Strong Force cares that colour exists, but you can shuffle the colour of everything and it will still act the same way as long as all shufflings were the same", and "the electroweak force doesn't care how you choose to assign eigenstates at energies high enough that you can pretend the W and Z are massless". The "strong isospin" symmetry was along the lines of "you can shuffle up-ness and down-ness and things still work as long as you do it consistently", I think. But I'm probably butchering that just a tad.


 * When I'd read up on hypercharge and isospin I'd gotten the impression that strong hypercharge was more or less just strangeness and strong isospin was more or less just a way of telling you how many of the quarks were up vs down. The "symmetry" part is a prediction that particles with the same hypercharge but varying isospin will look very similar (as some portion of the Strong Force doesn't care whether it's dealing with "up" or "down" quarks, per the previous paragraph).


 * I hope this is useful to you.--Christopher Thomas (talk) 02:03, 22 August 2012 (UTC)

Gahhh. As one wikipedian to another, I feel strongly obligated to present the standard example of "the double covering of O(3) by SU(2)", which goes under many many different names, including $$\frac{1}{2}\otimes\frac{1}{2}=0\oplus 1$$, and the adjoint representation, the triplet (the product of two doublets is a triplet and a singlet, so it's also written as $$2\otimes 2 = 3 \oplus 1$$), and many others. These are all "secret code words" for what follows. So here goes: Let $$M=M_{ij}$$ be a 3x3 rotation matrix, any matrix that is an element of O(3). Let $$\vec x = x_i= (x,y,z)$$ be some vector, any vector, in 3D space, say, the position of something, or whatever. Thus, $$ \vec x' = x'_i = \sum_j M_{ij} x_j$$ is the rotated vector. You can think of this in one of two ways: either the space itself was rotated, or the thing sitting at location x was moved. Either way, it's the same (well, opposite, actually, but no mind). Now here's the trick. Let $$\vec\tau = \tau_i$$ be the Pauli matrices, exactly as written in that article. Let me take the inner product:
 * $$\vec x \cdot \vec\tau = \left(\begin{matrix}z & x-iy\\ x+iy & -z\end{matrix}\right)$$

The above is an element of the Lie algebra su(2) (note the lower-case letters: su(2) is not a group. The group is always upper-case: it's SU(2), and in fact, SU(2) = exp(su(2)) where exp is the exponential. This is an advanced topic, but an exponential ($$e^x=1+x+x^2/2!+\cdots$$) is how you get from a tangent vector to a geodesic, in both general relativity and in quantum mechanics. But never mind, I digress). So anyway, $$\vec x \cdot \vec\tau$$ is some tangent vector in su(2). And, now, for the big reveal, the punch-line, the secret sauce: For EVERY matrix $$M\in O(3)$$ there exists a unitary matrix $$U\in SU(2)$$ such that:


 * $$\vec \tau \cdot (M \vec x) = U^\dagger\,(\vec\tau\cdot\vec x)\,U$$

holds for all x. By "unitary", I mean $$U^\dagger U = 1 = UU^\dagger$$. In fact, there are exactly TWO such U's: if one is U, then the other is -U. This TWO is the "double covering". (double cover (topology))
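The conjugation formula and the double-cover statement can be checked numerically; a sketch with numpy (illustrative; U is built here for a rotation about the z axis, an arbitrary choice, and the 3x3 matrix M is recovered from U via the Pauli trace identity):

```python
import numpy as np

# Pauli matrices tau_1, tau_2, tau_3
s = [np.array([[0, 1], [1, 0]], dtype=complex),
     np.array([[0, -1j], [1j, 0]], dtype=complex),
     np.array([[1, 0], [0, -1]], dtype=complex)]

theta = 0.7
U = np.cos(theta/2) * np.eye(2) - 1j * np.sin(theta/2) * s[2]  # in SU(2)

# Rotation induced by conjugation: M_ij = (1/2) tr(tau_i U^† tau_j U)
def induced(V):
    return np.real(np.array([[0.5 * np.trace(s[i] @ V.conj().T @ s[j] @ V)
                              for j in range(3)] for i in range(3)]))

M = induced(U)
x = np.array([0.3, -1.2, 0.8])
lhs = sum((M @ x)[i] * s[i] for i in range(3))                # tau . (M x)
rhs = U.conj().T @ sum(x[i] * s[i] for i in range(3)) @ U    # U^† (tau.x) U
assert np.allclose(lhs, rhs)

# M really is a rotation, and -U induces the exact same M: the double cover.
assert np.allclose(M @ M.T, np.eye(3)) and np.isclose(np.linalg.det(M), 1.0)
assert np.allclose(M, induced(-U))
```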

Now, for some more verbiage that might clarify what you've seen before: The above is central to how angular momentum is done in quantum mechanics. It is exactly the same formula as the standard, generic one from undergrad textbooks, below:
 * $$|l=1\; m\rangle = \sum_{m'\, m''} |l=\tfrac{1}{2}\; m'\rangle\,|l=\tfrac{1}{2}\; m''\rangle\, \langle \tfrac{1}{2}\; m';\; \tfrac{1}{2}\; m'' |l=1\; m\rangle$$

where
 * $$\langle \tfrac{1}{2}\; m';\; \tfrac{1}{2}\; m'' |l=1\; m\rangle$$

are the Clebsch–Gordan coefficients. Well, OK, to get "exactly the same formula", you actually need to multiply both sides by spherical harmonics $$Y_{lm}(x)$$ and so on, but I've run out of steam....

OK, well, that's all I've got for now. There's lots lots more. I don't know if you will find this impenetrable or if it will be a clear-as-a-bell ah-ha moment. If it's still impenetrable: may I remind you: this is all standard material treated in a variety of undergrad-level math and physics books. It's not supposed to be overwhelming. However, it is VITAL to actually do the homework exercises: simply reading is just not enough. You've got to crank on the pencil while it's touching paper. The Clebsch–Gordan stuff can be found in chemistry and QM books. The stuff about matrices can be found in undergrad books on group theory and linear algebra. If you can't find the right book, then find a university library with open stacks, get in there, and start picking books off the shelf. Find one that is at your level, and start reading. Hope this helped. But really, if you want to get anywhere in physics, you've absolutely got to not just understand, but master this material; it's absolutely central to just about everything. linas (talk) 16:00, 23 August 2012 (UTC)


 * I mean, physicists talk in code-words: the above, and more, is often encoded in just one or two words, and when you read that one word, all of the above is supposed to come flooding into your mind, and you are supposed to just know it. These code-words include "adjoint", "fundamental", "2"-rep, "doublet", "triplet" (which is a synonym for "vector"), and many many more: for SU(3) it's 3 and 3-bar, while for SL(2,C) it's "special relativity", since special relativity uses a formula almost exactly the same as that above, but for spinors and for O(3,1). And the guys who launch spacecraft know about SL(2,R) and O(4), since these are the orbital parameters for planets and moons, and spacecraft... All of these have a very similar, generic set of formulas they all share in common, with slightly different details. It's a real zoo. Gahhh. linas (talk) 16:18, 23 August 2012 (UTC)
 * The SU(3) version of the above is $$3\otimes \overline 3 = 8\oplus 1$$ where this formula is short-hand for something that looks a lot like the above, but requires 8 by 8 matrices, and junk. And finally $$3\otimes 3\otimes 3=10\oplus 8\oplus 8 \oplus 1$$ for the baryons: again, it's secretly the exact same formula as above, but with 10x10 matrices, and etc. The details become a lot trickier/nastier, though. linas (talk) 16:35, 23 August 2012 (UTC)
 * BTW, homework exercise: compute $$\exp(\vec\tau\cdot\vec x)$$ by hand. You can do it. Hint: $$\exp(x)=1+x+x^2/2+\cdots$$ linas (talk) 16:26, 23 August 2012 (UTC)


 * Well that gives
 * $$\mathbb{I} + \left(\vec\tau\cdot\vec x\right) + \vert x \vert^2\mathbb{I}/2 + \vert x \vert^2 \left(\vec\tau\cdot\vec x\right) /6 + \vert x \vert^4 \mathbb{I}/24 + ... $$
 * but I don't really see how that's interesting. Headbomb {talk / contribs / physics / books} 19:04, 27 August 2012 (UTC)
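What makes the series interesting is that $$(\vec\tau\cdot\vec x)^2 = |x|^2\mathbb{I}$$, so it collapses to the closed form $$\cosh|x|\,\mathbb{I} + \sinh|x|\,(\vec\tau\cdot\vec x)/|x|$$; a quick numerical check (illustrative, not part of the discussion):

```python
import numpy as np

# Pauli matrices
s = [np.array([[0, 1], [1, 0]], dtype=complex),
     np.array([[0, -1j], [1j, 0]], dtype=complex),
     np.array([[1, 0], [0, -1]], dtype=complex)]

x = np.array([0.4, -0.2, 0.5])
r = np.linalg.norm(x)
A = sum(x[i] * s[i] for i in range(3))        # tau . x

assert np.allclose(A @ A, r**2 * np.eye(2))   # (tau.x)^2 = |x|^2 I

# Sum the exponential series exp(A) = sum A^n / n! directly...
E, term = np.eye(2, dtype=complex), np.eye(2, dtype=complex)
for n in range(1, 30):
    term = term @ A / n
    E += term

# ...and compare with the closed form the series collapses to.
closed = np.cosh(r) * np.eye(2) + (np.sinh(r) / r) * A
assert np.allclose(E, closed)
```

(With an i inserted, $$\exp(i\vec\tau\cdot\vec x)$$, cosh and sinh become cos and sin, which is how elements of the group SU(2) are reached from the algebra.)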

Break
That's a lot to take in all at once. I'll give it a deeper read (probably over the week-end, assuming I have time). Maybe I can talk a bit about how I'm approaching SU(N) in general, and maybe you can fill in some gaps. Bear in mind my terminology may not be accurate. I'll take SU(2) spin symmetries as an example, then generalize to SU(3).

SU(2)
For spin SU(2), there are two fundamental states, say u & d, so picking those states as a basis makes a lot of sense. In general, a state will be described by
 * $$ \vert V \rangle = a \vert u \rangle + b \vert d \rangle$$

with the basis of this vector space being
 * $$ \alpha = \left\{ \vert u \rangle, \vert d \rangle \right\}$$

where
 * $$ \left[ \vert u \rangle \right]_\alpha = \left( \begin{matrix} 1 \\ 0 \end{matrix} \right);~ \left[ \vert d \rangle \right]_\alpha = \left( \begin{matrix} 0 \\ 1 \end{matrix} \right) $$

I suspect very strongly that the basis $$\alpha$$ (aka the set of vectors) is 2.

Now if we want to construct operators that change $$\vert u \rangle$$ to $$\vert d \rangle$$ and vice versa, you need to define them as

 * $$ \left[ \hat S_{d\to u} \right]_\alpha = \left[ \hat S_{+} \right]_\alpha = \hbar \left( \begin{matrix} 0 & 1\\ 0 & 0 \end{matrix} \right) $$

and
 * $$ \left[ \hat S_{u\to d} \right]_\alpha = \left[ \hat S_{-} \right]_\alpha = \hbar \left( \begin{matrix} 0 & 0\\ 1 & 0 \end{matrix} \right) $$

which can, for convenience, be written as
 * $$ \left[ \hat S_{\pm} \right]_\alpha = \frac{\hbar}{2} \left( \begin{matrix} 0 & 1\\ 1 & 0 \end{matrix} \right) \pm i \frac{\hbar}{2}\left( \begin{matrix} 0 & -i\\ i & 0 \end{matrix} \right) $$

Now since $$\vert u \rangle$$ and $$\vert d \rangle$$ are eigenvectors, they have eigenvalues. Whatever they are, their eigenvalue operator will be diagonal. Since we want them to have the eigenvalues $$\pm \frac{\hbar}{2}$$, then it follows that the eigenvalue operator is

 * $$ \left[ \hat S_z \right]_\alpha = \frac{\hbar}{2} \left( \begin{matrix} 1 & 0\\ 0 & -1 \end{matrix} \right) $$

Or, using Pauli matrices notation
 * $$\left[ \hat S_\pm \right]_\alpha= \frac{\hbar}{2} \left(\sigma_1 \pm i \sigma_2\right)$$

and
 * $$\left[ \hat S_x \right]_\alpha= \frac{\hbar}{2} \sigma_1$$
 * $$\left[ \hat S_y \right]_\alpha= \frac{\hbar}{2} \sigma_2$$
 * $$\left[ \hat S_z \right]_\alpha= \frac{\hbar}{2} \sigma_3$$

and
 * $$\left[ \hat{\mathbf{S}}^2 \right]_\alpha= \left[ \hat S_x \cdot \hat S_x + \hat S_y \cdot \hat S_y + \hat S_z \cdot \hat S_z \right]_\alpha= \frac{3}{4}\hbar^2 \left( \begin{matrix} 1 & 0\\ 0 & 1 \end{matrix} \right)$$

If this were isospin, we would drop the hbars and change S to I everywhere.
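The matrices above can be sanity-checked in a few lines; a sketch with numpy (illustrative; ħ is set to 1 here purely as a scale choice):

```python
import numpy as np

hbar = 1.0  # scale choice; it carries no structural content
s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]], dtype=complex)
s3 = np.array([[1, 0], [0, -1]], dtype=complex)

Sx, Sy, Sz = (hbar/2) * s1, (hbar/2) * s2, (hbar/2) * s3
Sp = (hbar/2) * (s1 + 1j * s2)   # S_+ : sends |d> to |u>
Sm = (hbar/2) * (s1 - 1j * s2)   # S_- : sends |u> to |d>

u = np.array([1, 0], dtype=complex)
d = np.array([0, 1], dtype=complex)
assert np.allclose(Sp @ d, hbar * u) and np.allclose(Sm @ u, hbar * d)
assert np.allclose(Sz @ u, +(hbar/2) * u)
assert np.allclose(Sz @ d, -(hbar/2) * d)

# Casimir: S^2 = (3/4) hbar^2 I, i.e. j(j+1) with j = 1/2
S2 = Sx @ Sx + Sy @ Sy + Sz @ Sz
assert np.allclose(S2, 0.75 * hbar**2 * np.eye(2))
```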
 * OK. This is the fundamental representation of the Lie algebra su(2), and not of the Lie group SU(2). It has a representation for every positive integer (which, in this particular case, is the dimension of the basis, as you note at the very beginning). Also, there is no h-bar in the math; it only shows up for physics. S^2 is called a Casimir invariant. linas (talk) 04:39, 25 August 2012 (UTC)


 * Well I guess that's one thing somewhat clarified. I just thought su(2) and SU(2) were the same thing, but written in a different convention. I guess I'll have to pay some attention to that and re-read some stuff. So now... is $$\mathbf{2} = \alpha = \left\{ \vert u \rangle, \vert d \rangle \right\}$$ [aka the set of kets themselves], or is $$\mathbf{2} = \left\{ \left[ \vert u \rangle \right]_\alpha, \left[ \vert d \rangle \right]_\alpha \right\}$$ [aka the set of representations of kets in the $$\alpha$$ basis], or something else? Headbomb {talk / contribs / physics / books} 05:20, 25 August 2012 (UTC)


 * Yes. The 2 also refers to 2=2j+1 where j=1/2 is the spin. The Casimir invariant S^2 is equal to j(j+1). Note that the last two sentences do not generalize to su(3); they work only for su(2). The Casimir invariant is one way to tell apart different representations. The last statement does generalize to su(n), although the invariant itself is not equal to j(j+1) in general. (Oh, notice the Casimir invariant is just the identity matrix, times a constant... that is what makes it invariant.) linas (talk) 05:50, 25 August 2012 (UTC)

SU(3)
Now if this is an SU(3) system, then there are three fundamental states, say u, d, and s. In general, a state will be described by
 * $$ \vert V \rangle = a \vert u \rangle + b \vert d \rangle + c \vert s \rangle$$

with the basis of this vector space being
 * $$ \beta = \left\{ \vert u \rangle, \vert d \rangle, \vert s \rangle \right\}$$

where
 * $$ \left[ \vert u \rangle \right]_\beta = \left( \begin{matrix} 1 \\ 0 \\ 0 \end{matrix} \right);~ \left[ \vert d \rangle \right]_\beta = \left( \begin{matrix} 0 \\ 1 \\ 0 \end{matrix} \right);~ \left[ \vert s \rangle \right]_\beta = \left( \begin{matrix} 0 \\ 0 \\ 1 \end{matrix} \right) $$

I suspect very strongly that the basis $$\beta$$ (aka the set of vectors) is 3. Following the same arguments as for isospin SU(2), we need three pairs of operators that can switch to and from u to d, u to s, and d to s. In this basis,

 * $$ \left[ \hat S_{d \to u} \right]_\beta = \left[ \hat S^{ud}_{+} \right]_\beta = \left( \begin{matrix} 0 & 1 & 0\\ 0 & 0 & 0\\ 0 & 0 & 0 \end{matrix} \right) $$
 * $$ \left[ \hat S_{u \to d} \right]_\beta = \left[ \hat S^{ud}_{-} \right]_\beta = \left( \begin{matrix} 0 & 0 & 0\\ 1 & 0 & 0\\ 0 & 0 & 0 \end{matrix} \right) $$

and so on, which can for convenience be written as
 * $$ \left[ \hat S^{ud}_{\pm} \right]_\beta = \frac{1}{2} \left( \begin{matrix} 0 & 1 & 0\\ 1 & 0 & 0\\ 0 & 0 & 0 \end{matrix} \right) \pm i \frac{1}{2}\left( \begin{matrix} 0 & -i & 0\\ i & 0 & 0\\ 0 & 0 & 0 \end{matrix} \right) $$

and so on. Skipping directly to Gell-Mann matrices notation, these ladder operators can be written as
 * $$\left[ \hat S^{ud}_\pm \right]_\beta = \frac{1}{2} \left(\lambda_1 \pm i \lambda_2\right)$$
 * $$\left[ \hat S^{us}_\pm \right]_\beta = \frac{1}{2} \left(\lambda_4 \pm i \lambda_5\right)$$
 * $$\left[ \hat S^{ds}_\pm \right]_\beta = \frac{1}{2} \left(\lambda_6 \pm i \lambda_7\right)$$

Now, because we know the properties of quarks, we can build a bunch of different operators. The charge number operator is, for example,
 * $$\left[ \hat Q \right]_\beta = \frac{1}{3} \left( \begin{matrix} +2 & 0 & 0\\ 0 & -1 & 0\\ 0 & 0 & -1 \end{matrix} \right) $$

and if you want to define yourself Upness (U'), Downness (D'), Strangeness (S'), and Baryon number (B) operators in this basis, they would be

 * $$ \left[ \hat U' \right]_\beta = \left( \begin{matrix} +1 & 0 & 0\\ 0 & 0 & 0\\ 0 & 0 & 0 \end{matrix} \right);~ \left[ \hat D' \right]_\beta = \left( \begin{matrix} 0 & 0 & 0\\ 0 & +1 & 0\\ 0 & 0 & 0 \end{matrix} \right);~ \left[ \hat S' \right]_\beta = \left( \begin{matrix} 0 & 0 & 0\\ 0 & 0 & 0\\ 0 & 0 & +1 \end{matrix} \right) $$

and baryon number, isospin projection and hypercharge would be
 * $$ \left[ \hat B \right]_\beta = \frac{1}{3}\left( \left[ \hat U' \right]_\beta + \left[ \hat D' \right]_\beta + \left[ \hat S' \right]_\beta \right) = \frac{1}{3}\left( \begin{matrix} +1 & 0 & 0\\ 0 & +1 & 0\\ 0 & 0 & +1 \end{matrix} \right) = \frac{1}{3}\mathbb{I} $$
 * $$ \left[ \hat I_z \right]_\beta = \frac{1}{2}\left(\left[ \hat U' \right]_\beta - \left[ \hat D' \right]_\beta \right) = \frac{1}{2} \left( \begin{matrix} +1 & 0 & 0\\ 0 & -1 & 0\\ 0 & 0 & 0 \end{matrix} \right) = \frac{1}{2} \lambda_3 $$
 * $$ \left[ \hat Y \right]_\beta = \frac{1}{3} \left[ \hat U'\right]_\beta + \frac{1}{3}\left[ \hat D' \right]_\beta - \frac{2}{3}\left[ \hat S' \right]_\beta = \frac{1}{3} \left( \begin{matrix} +1 & 0 & 0\\ 0 & +1 & 0\\ 0 & 0 & -2 \end{matrix} \right) = \frac{1}{\sqrt{3}} \lambda_8 = \left[ \hat B \right]_\beta - \left[ \hat S' \right]_\beta $$

and whether you work with
 * $$\hat Q = +\frac{2}{3} \hat U' - \frac{1}{3} \hat D' - \frac{1}{3} \hat S'$$

or
 * $$\hat Q = \hat I_z + \frac{1}{2} \left (\hat B + \hat S \right)$$

or
 * $$\hat Q = \hat I_z + \frac{1}{2} \hat Y $$

really is up to you. Headbomb {talk / contribs / physics / books} 02:03, 24 August 2012 (UTC)
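The three expressions for Q̂ are indeed equivalent on these diagonal operators; a sketch with numpy (illustrative; strangeness is taken as -S', the usual sign convention, which is an assumption here):

```python
import numpy as np

# Diagonal flavour-counting operators in the (u, d, s) basis.
U = np.diag([1., 0., 0.])
D = np.diag([0., 1., 0.])
S = np.diag([0., 0., 1.])

B  = (U + D + S) / 3          # baryon number: 1/3 per quark
Iz = (U - D) / 2              # isospin projection
Y  = (U + D - 2 * S) / 3      # hypercharge
Q  = (2 * U - D - S) / 3      # electric charge
strangeness = -S              # an s quark has strangeness -1

assert np.allclose(Q, Iz + 0.5 * Y)               # Q = I_z + Y/2
assert np.allclose(Q, Iz + 0.5 * (B + strangeness))  # Gell-Mann-Nishijima
assert np.allclose(Y, B + strangeness)            # Y = B + S ties the two together
```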
 * OK, the above looks more or less right. Again, it's lowercase su(3) the algebra, not upper-case SU(3) the group. When you get deeper, you will discover that su(3) is significantly more complicated than su(2) in many ways. But at this level, all is well. linas (talk) 05:05, 25 August 2012 (UTC)

Composite systems
Now I'll just do the $$\mathbf{2} \otimes \mathbf{2}$$ stuff, because this is fairly tedious, and I don't feel like dealing with 8x8 matrices ($$\mathbf{2} \otimes \mathbf{2} \otimes \mathbf{2}$$), or 27x27 matrices ($$\mathbf{3} \otimes \mathbf{3} \otimes \mathbf{3}$$).

So now if we have a composite system of two identical particles, the relevant vector space will be generated by the tensor product of the single-particle space by itself. This is a space, as far as I understand it, whose basis is generated by the direct product of the set of basis vectors from the single-particle space by itself. AKA
 * $$\vert uu \rangle = \vert u \rangle \otimes \vert u \rangle $$
 * $$\vert ud \rangle = \vert u \rangle \otimes \vert d \rangle $$
 * $$\vert du \rangle = \vert d \rangle \otimes \vert u \rangle $$
 * $$\vert dd \rangle = \vert d \rangle \otimes \vert d \rangle $$

and a vector in this space will be described by
 * $$\vert W \rangle = a \vert uu \rangle + b \vert ud \rangle + c \vert du \rangle +d\vert dd \rangle$$

with the basis
 * $$\delta = \left\{ \vert uu \rangle, \vert ud \rangle, \vert du \rangle, \vert dd \rangle \right\} $$

where

 * $$ \left[ \vert uu \rangle \right]_\delta = \left( \begin{matrix} 1\\ 0\\ 0\\ 0 \end{matrix} \right) ;~ \left[ \vert ud \rangle \right]_\delta = \left( \begin{matrix} 0\\ 1\\ 0\\ 0 \end{matrix} \right) ;~ \left[ \vert du \rangle \right]_\delta= \left( \begin{matrix} 0\\ 0\\ 1\\ 0 \end{matrix} \right) ;~ \left[ \vert dd \rangle \right]_\delta = \left( \begin{matrix} 0\\ 0\\ 0\\ 1 \end{matrix} \right) $$

In this basis, you can define the spin operators for individual particles, even if the system is composite. For example,

$$\left[ \hat S_{1, z} \right]_\delta = \frac{\hbar}{2} \left( \begin{matrix} 1 & 0 & 0 & 0\\ 0 & 1 & 0 & 0\\ 0 & 0 & -1 & 0\\ 0 & 0 & 0 & -1\\ \end{matrix} \right); ~ \left[ \hat S_{2, z} \right]_\delta= \frac{\hbar}{2} \left( \begin{matrix} 1 & 0 & 0 & 0\\ 0 & -1 & 0 & 0\\ 0 & 0 & 1 & 0\\ 0 & 0 & 0 & -1\\ \end{matrix} \right) $$ and if you do the total spin projection
 * $$\hat S_z = \hat S_{1, z} + \hat S_{2, z}$$

everything works fine in this basis, since adding diagonal operators also yields a diagonal operator, aka

$$\left[ \hat S_z \right]_\delta= \hbar \left( \begin{matrix} 1 & 0 & 0 & 0\\ 0 & 0 & 0 & 0\\ 0 & 0 & 0 & 0\\ 0 & 0 & 0 & -1\\ \end{matrix} \right) $$ However, $$ \hat \mathbf{S}^2$$ isn't so fun, as it's rather complex, namely
 * $$ \hat \mathbf{S}^2 = ( \hat \mathbf{S}_1 + \hat \mathbf{S}_2)^2$$

which if you go through the long hassle, becomes something like

$$\left[ \hat \mathbf{S}^2 \right]_\delta = \hbar^2 \left( \begin{matrix} 2 & 0 & 0 & 0\\ 0 & 1 & 1 & 0\\ 0 & 1 & 1 & 0\\ 0 & 0 & 0 & 2\\ \end{matrix} \right) $$ This operator isn't diagonal, meaning the $$\delta$$ basis is not a basis formed of eigenvectors of both $$\hat S_z$$ and $$\hat \mathbf{S}^2$$. So you calculate the eigenvectors of $$\left[ \hat \mathbf{S}^2 \right]_\delta$$, and this gives you
 * $$\vert uu \rangle $$
 * $$\frac{1}{\sqrt{2}}\left(\vert ud \rangle + \vert du \rangle\right)$$
 * $$\vert dd \rangle $$

which are symmetric under exchange of u and d and
 * $$\frac{1}{\sqrt{2}}\left(\vert ud \rangle - \vert du \rangle\right)$$

which is antisymmetric under exchange of u and d.
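The matrix for the total spin squared can be reproduced numerically rather than by hand. A numpy sketch, with ħ set to 1 and the operator embeddings as above:

```python
import numpy as np

hbar = 1.0
I2 = np.eye(2)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)

# Single-particle spin operators embedded in the two-particle space:
# S1 acts on the first slot, S2 on the second.
S1 = [hbar / 2 * np.kron(s, I2) for s in (sx, sy, sz)]
S2 = [hbar / 2 * np.kron(I2, s) for s in (sx, sy, sz)]

# Total spin squared, S^2 = sum_k (S1_k + S2_k)^2
S_sq = sum((a + b) @ (a + b) for a, b in zip(S1, S2))

print(np.real(S_sq))                      # the 4x4 matrix above, in units of hbar^2
print(np.linalg.eigvalsh(S_sq).round(6))  # 0 once and 2 three times: s(s+1) for s = 0, 1
```

The eigenvalue 2 appearing three times and 0 once matches the three symmetric and one antisymmetric eigenvectors listed above.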

Taking a new basis, $$\eta$$ consisting of
 * $$\eta = \left\{ \vert uu \rangle, \frac{1}{\sqrt{2}} \left( \vert ud \rangle + \vert du \rangle \right), \vert dd \rangle, \frac{1}{\sqrt{2}} \left( \vert ud \rangle - \vert du \rangle \right) \right\}$$

where

$$\left[ \vert uu \rangle \right]_\eta = \left( \begin{matrix} 1\\ 0\\ 0\\ 0 \end{matrix} \right) ;~ \left[ \frac{1}{\sqrt{2}}\left(\vert ud \rangle + \vert du \rangle \right) \right]_\eta= \left( \begin{matrix} 0\\ 1\\ 0\\ 0 \end{matrix} \right) ;~ \left[ \vert dd \rangle \right]_\eta = \left( \begin{matrix} 0\\ 0\\ 1\\ 0 \end{matrix} \right) ;~ \left[ \frac{1}{\sqrt{2}}\left(\vert ud \rangle - \vert du \rangle \right) \right]_\eta = \left( \begin{matrix} 0\\ 0\\ 0\\ 1 \end{matrix} \right) $$ In THIS basis, both operators are diagonal, namely

$$\left[ \hat S_z \right]_\eta = \hbar \left( \begin{matrix} 1 & 0 & 0 & 0\\ 0 & 0 & 0 & 0\\ 0 & 0 & -1 & 0\\ 0 & 0 & 0 & 0\\ \end{matrix} \right) $$ and

$$\left[ \hat \mathbf{S}^2 \right]_\eta= \hbar^2 \left( \begin{matrix} 2 & 0 & 0 & 0\\ 0 & 2 & 0  & 0\\ 0 & 0 & 2 & 0\\ 0 & 0 & 0 & 0\\ \end{matrix} \right) $$
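The change of basis itself can be checked with a similarity transform: build the matrix P whose columns are the eta basis vectors written in the delta basis, and conjugate both operators. A numpy sketch (ħ = 1, operator matrices copied from above):

```python
import numpy as np

hbar = 1.0
# Operators in the delta (product) basis.
Sz = hbar * np.diag([1.0, 0.0, 0.0, -1.0])
S_sq = hbar**2 * np.array([[2, 0, 0, 0],
                           [0, 1, 1, 0],
                           [0, 1, 1, 0],
                           [0, 0, 0, 2]], dtype=float)

# Columns of P: |uu>, (|ud> + |du>)/sqrt2, |dd>, (|ud> - |du>)/sqrt2
r = 1 / np.sqrt(2)
P = np.array([[1, 0, 0, 0],
              [0, r, 0, r],
              [0, r, 0, -r],
              [0, 0, 1, 0]])

print((P.T @ Sz @ P).round(6))    # diag(1, 0, -1, 0)
print((P.T @ S_sq @ P).round(6))  # diag(2, 2, 2, 0)
```

Both conjugated matrices come out diagonal, matching the two eta-basis matrices above.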

Now I think the set of symmetric eigenvectors has 3 elements, and the set of antisymmetric eigenvectors has 1, but as above, I am unsure of this. However, what these have to do with "irreducible representations of SU(2)" I have no idea, since I don't know what in the world a representation of SU(2) is.


 * OK, so you have just discovered that $$2\otimes 2 = 3\oplus 1$$: it's a triplet (physics) plus a singlet (physics). Here, $$\oplus$$ is the direct sum, specifically, the direct sum of matrices. A representation is a set of matrices that behave the same way as the abstract algebra. In this case, the abstract algebra is (lower-case) su(2), which may be written as $$[L_1, L_2] = L_3$$ (plus the 5 other various permutations: $$[L_i, L_j] = \epsilon_{ijk}L_k$$, where the $$\epsilon_{ijk}$$ are the structure constants). Here, the $$L_i$$ have no meaning; they're just abstract squiggles on a page that are NOT matrices, or anything else; they're just symbols obeying some rules, the rules of an algebra. So, way up at the beginning, you discovered that the Pauli matrices behave exactly like this algebra (after multiplying by i on both sides). Here, you discover some 4x4 matrices that also behave like this algebra: they represent it. Then you discover that your 4x4 matrices can be reduced to a direct sum of a 3x3 and a 1x1 matrix. The 3x3 cannot be further reduced; it is irreducible. In physics, the 3x3 rep is called the spin-1 rep. Which, happily, is the rep of o(3), the rotation matrices for 3D space.
 * Notice that your 3x3 matrices are symmetric: this is a prerequisite for them being the generators of an orthogonal matrix (since the 3D space rotation matrices are orthogonal). Note that the 2x2 matrices were not symmetric (or anti-symmetric); they're just blah. This is one way in which representations differ from the abstract algebra they are representing. The algebra doesn't say anything about having to be orthogonal or symmetric. linas (talk) 05:01, 25 August 2012 (UTC)
 * And here's pretty much where everything breaks down. I have results that strongly hint at something related to group theory, but I don't know what exactly. Before, 2 was the fundamental representation of su(2), but if you take their direct product, you end up with irreducible representations of SU(2), which are somehow related to either $$\left[ \hat S_z \right]_\eta$$ or $$\left[ \hat \mathbf{S}^2 \right]_\eta$$, or both, or something else. Headbomb {talk / contribs / physics / books} 05:43, 25 August 2012 (UTC)


 * OK, so the above is still a representation of lower-case su(2), the algebra. To get the group you must use exp(x)=1+x+x^2/2+... which is a whole different topic altogether. More about this later. The irreducible representations of su(2) are for spins j=0, j=1/2, j=1, j=3/2, etc., and these are 2j+1 dimensional, i.e. require (2j+1)x(2j+1) matrices. For an irreducible representation, you will always have $$\mathbf{S}^2 = j(j+1) \mathbf{I} $$ where I is the identity matrix. (I omitted the hat and the brackets and stuff, since I'm not sure what you mean by them; that notation is not central to what's really happening.) This is how you can tell apart the different spins: they have different values for the Casimir invariant. Also, you will always have that $$\mathbf{S}_z$$ has 2j+1 eigenvalues, and these eigenvalues are always in the range m = -j, -j+1, ..., +j. That's all there is, i.e. this is the grand total of generic properties of su(2). Absolutely none of this holds for su(3), which consists of a different bag of tricks. There is a general pattern, discernible if you study su(n), things like Dynkin diagrams, and root systems, and stuff. But it's a good bit more complicated. Never understood the depths, myself. linas (talk) 06:05, 25 August 2012 (UTC)
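Both generic properties (Casimir a multiple of the identity, 2j+1 equally spaced eigenvalues of S_z) can be spot-checked for j = 1. A numpy sketch, assuming the standard spin-1 matrices and ħ = 1:

```python
import numpy as np

hbar = 1.0
r = 1 / np.sqrt(2)
# Standard spin-1 matrices in the |m = +1, 0, -1> basis.
Jx = hbar * r * np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=complex)
Jy = hbar * r * np.array([[0, -1j, 0], [1j, 0, -1j], [0, 1j, 0]])
Jz = hbar * np.diag([1.0, 0.0, -1.0]).astype(complex)

J_sq = Jx @ Jx + Jy @ Jy + Jz @ Jz
print(np.real(J_sq))           # 2 I, i.e. j(j+1) with j = 1: the Casimir is a multiple of I
print(np.linalg.eigvalsh(Jz))  # the 2j+1 = 3 eigenvalues m = -1, 0, +1
```

The same check works for any irreducible spin-j set of matrices; only the values j(j+1) and the range of m change.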


 * BTW, when you see the baryons and mesons drawn on a hexagon (and the center dot is doubled-up) or the big triangle with ten dots, for su(3) (not su(2)!) that shape is actually the root system for su(3) (scroll down to the middle of the article). The root system for su(2) is too simple, and so it is never drawn.  As you can see, towards the bottom of that article, the root systems get complicated fast. linas (talk) 06:17, 25 August 2012 (UTC)


 * I basically use the convention $$\left [ ... \right]_x$$ to mean "... as represented in the basis x", and I put hats on operators. This is very useful when working in multiple bases, or when you want to distinguish operators from their eigenvalues and associated quantum numbers. For example, in
 * $$\hat \mathbf{S}^2 \vert \Psi \rangle= S^2 \vert \Psi \rangle = s(s+1)\hbar^2 \vert \Psi \rangle $$
 * $$\hat \mathbf{S}^2$$ is the operator, $$S^2$$ is its eigenvalue, and $$s$$ is its quantum number, whereas in
 * $$\hat S_z \vert \Psi \rangle= S_z \vert \Psi \rangle = s_z \hbar\vert \Psi \rangle $$
 * $$\hat S_z $$ is the operator, $$S_z $$ is its eigenvalue, and $$s_z$$ is its quantum number.
 * For the $$\left [ ... \right]_x$$, well, consider the two representations of $$\hat S_z$$ in this section. There were two relevant bases, delta and eta, and the matrix representations of the operators were different in each basis. Aka
 * $$\left[ \hat S_z \right]_\delta= \hbar \left( \begin{matrix} 1 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & -1 \\ \end{matrix} \right);~ \left[ \hat S_z \right]_\eta= \hbar \left( \begin{matrix} 1 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & -1 & 0 \\ 0 & 0 & 0 & 0 \\ \end{matrix} \right) $$
 * Headbomb {talk / contribs / physics / books} 06:26, 25 August 2012 (UTC)

spin 3/2
Doing $$\mathbf{2} \otimes \mathbf{2} \otimes \mathbf{2}$$ is basically doing the same thing, except you deal with 8x8 matrices, and you end up with a 4 + 2 + 2, where the 4 symmetrical eigenvectors are
 * $$\vert uuu \rangle $$
 * $$\frac{1}{\sqrt{3}} \left( \vert uud \rangle + \vert udu \rangle + \vert duu \rangle \right)$$
 * $$\frac{1}{\sqrt{3}} \left( \vert ddu \rangle + \vert dud \rangle + \vert udd \rangle \right)$$
 * $$\vert ddd \rangle $$

and you pick two out of the following three sets for the partially antisymmetric eigenvectors
 * $$\frac{1}{\sqrt{2}} \left( \vert udu \rangle - \vert duu \rangle \right); ~ \frac{1}{\sqrt{2}} \left( \vert dud \rangle - \vert udd \rangle \right)$$
 * $$\frac{1}{\sqrt{2}} \left( \vert uud \rangle - \vert duu \rangle \right); ~ \frac{1}{\sqrt{2}} \left( \vert ddu \rangle - \vert udd \rangle \right)$$
 * $$\frac{1}{\sqrt{2}} \left( \vert uud \rangle - \vert udu \rangle \right); ~ \frac{1}{\sqrt{2}} \left( \vert uud \rangle - \vert dud \rangle \right)$$

Headbomb {talk / contribs / physics / books} 14:29, 24 August 2012 (UTC)
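The 4 + 2 + 2 counting can be verified numerically: on the 8-dimensional three-particle space, the total S² has eigenvalue 15/4 with multiplicity 4 (the spin-3/2 quadruplet) and 3/4 with multiplicity 4 (the two spin-1/2 doublets). A numpy sketch, with ħ = 1:

```python
import numpy as np
from functools import reduce

I2 = np.eye(2).astype(complex)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def embed(op, slot, n=3):
    # Place a single-particle operator in one slot of an n-fold tensor product.
    mats = [I2] * n
    mats[slot] = op
    return reduce(np.kron, mats)

# Total spin components on the 8-dimensional space (hbar = 1).
S = [sum(embed(s / 2, k) for k in range(3)) for s in (sx, sy, sz)]
S_sq = sum(comp @ comp for comp in S)

print(np.linalg.eigvalsh(S_sq).round(6))
# 0.75 four times and 3.75 four times, i.e. j(j+1) for j = 1/2, 1/2, 3/2
```

This sidesteps writing the 8x8 matrices by hand; the multiplicities read off the eigenvalue list give the decomposition directly.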


 * OK. The simpler way to get this is $$2\otimes 2\otimes 2 = 2\otimes (3\oplus 1) = (2\otimes 3) \oplus (2\otimes 1)= (4\oplus 2) \oplus 2$$. The 4x4 matrices describe spin-3/2. In general, spin-j requires (2j+1)x(2j+1) matrices to describe it. The various $$\pm\frac{1}{\sqrt 3}$$ coefficients are the Clebsch–Gordan coefficients, which describe how to add and subtract spins in general. I promise not to confuse you by ever mentioning 3-j symbols or 6-j symbols, which are something else again. (However, I will stop to brag, and state that, in order to correctly compute the microwave emission spectrum of HO2, a highly unstable radical found in the upper atmosphere, up with the ozone, you have to take into account the coupling of the spin-1/2 electron sitting on that lonely hydrogen, and couple it to the rotational state of the molecule. This requires the use of a 15-j symbol to be computed correctly, and this symbol has never been published. There is one, very very rare book, that discusses 12-j symbols ...)


 * Anyway, it is now time to start thinking about the group. Again: homework: $$exp(i\vec\sigma\cdot\vec\theta)$$ linas (talk) 05:23, 25 August 2012 (UTC)
 * Assuming, as above, that the sigma vector is a vector formed of the Pauli matrices, then that ends up being something like ...
 * $$\mathbb{I} + i \left( \vec \sigma \cdot \vec \theta \right) - \vert \theta \vert^2 \mathbb{I}/2 - i \vert \theta \vert^2 \left( \vec \sigma \cdot \vec \theta \right) / 6 + \vert \theta \vert^4 \mathbb{I} /24 + ...$$
 * which, as above, I don't see what's particularly interesting. Headbomb {talk / contribs / physics / books} 19:15, 27 August 2012 (UTC)
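For what it's worth, the even and odd terms of that series resum cleanly, since (σ·θ)² = |θ|² I. A numpy sketch comparing the brute-force series to the closed form (the example θ vector is arbitrary):

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)

theta = np.array([0.3, -0.7, 0.5])  # an arbitrary example vector
t = np.linalg.norm(theta)
sig_dot_theta = theta[0] * sx + theta[1] * sy + theta[2] * sz

# Brute-force power series for exp(A), with A = i sigma . theta
A = 1j * sig_dot_theta
U_series = np.eye(2, dtype=complex)
term = np.eye(2, dtype=complex)
for n in range(1, 30):
    term = term @ A / n
    U_series = U_series + term

# Closed form: even powers resum to cos|theta| I, odd powers to
# i (sin|theta| / |theta|) sigma . theta, because (sigma.theta)^2 = |theta|^2 I.
U_closed = np.cos(t) * np.eye(2) + 1j * np.sin(t) / t * sig_dot_theta

print(np.allclose(U_series, U_closed))                       # True
print(np.allclose(U_series.conj().T @ U_series, np.eye(2)))  # unitary: True
```

This is the "Euler's formula, but strangely different" mentioned below: cos and sin of |θ|, with σ·θ/|θ| playing the role of i.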


 * I don't quite get how that's simpler, because it seems pretty hard to go from 2 to the wavefunctions $$\frac{1}{\sqrt{2}} \left( \vert uud \rangle - \vert udu \rangle \right)$$ and $$\frac{1}{\sqrt{2}} \left( \vert uud \rangle - \vert dud \rangle \right)$$ [for example], or to know if 2 refers to those previous wavefunctions, or to $$\vert u \rangle $$ and $$\vert d \rangle $$.


 * As for bragging, I don't know if you ever derived the $$\Lambda^0$$ and $$\Sigma^0$$ flavour wavefunctions from scratch, but it ain't very fun. You can do it by building 27x27 matrices and diagonalizing them, or by using symmetry arguments based on the Pauli principle, but either way is pretty tedious. Luckily, non-uds baryons all have uds analogues, so once you got the light baryon wavefunctions, getting heavy baryon wavefunctions is just a matter of substituting uds by ijk where $$i \neq j \neq k \in \left\{u, d, s, c, b, t \right\}$$ and making sure k is the heaviest of the three quarks. And this has some pretty interesting implications which should result in two articles in the next two or so months. One concerning a new baryon nomenclature, and another on the generalization of isospin and mass groups. Headbomb {talk / contribs / physics / books} 06:05, 25 August 2012 (UTC)


 * It's simpler for su(2), cause you just whip out the required Clebsch–Gordan coefficients, and presto, you're done. There is very little calculation to do. The coefficients themselves are published in tables of various sorts. But you also seem to be trying to talk about su(3) at the same time, and that is something else. I am not aware of any simple way of computing anything for su(3), and yes, it is tedious to a whole new level. I know that the ideas of Clebsch–Gordan coeffs generalize, because I know that they have been generalized to symmetric spaces, and that their inverses, the analogs of the spherical harmonics, are the hypergeometric functions (which obey many truly magical identities). But I don't know of any simple su(3) devices that could be used. linas (talk) 06:30, 25 August 2012 (UTC)


 * p.s. be sure that you are not mixing up su(2) and su(3) in your calculations, this is probably a great source of headache!! linas (talk) 06:30, 25 August 2012 (UTC)
 * Oh, there are no such calculations. Or rather, whatever calculation was involved, Okubo and Gell-Mann did all the work for me, and I'm just generalizing their formula to generalized isospin and generalized mass groups. With it, I can pretty much predict the mass of all baryons from quark masses, or find quark masses from the mass of baryons. And I can also rule out this reported mass for the $$\Xi_{cc}^+$$ on a theoretical basis, as it is too low by about 200 MeV. Headbomb {talk / contribs / physics / books} 06:41, 25 August 2012 (UTC)
 * OK, realize that the Gell-Mann–Okubo formula is approximate; it's kind of like the Rydberg formula, but for quarks. The fact that it worked was one of the key early pieces of evidence for quarks. However, just like the Rydberg formula is not central to the theory of atoms, but rather a kind of side-effect of the full QM theory, so also the Gell-Mann–Okubo formula should somehow, someday, emerge from a complete theory of QCD, if and when that arrives. The article should say the above, but it doesn't. Oh well. linas (talk) 14:27, 25 August 2012 (UTC)
 * It's approximate, yeah; if I recall, Okubo derived it by perturbation theory (both to first and second order), and it has a bunch of free parameters. But if you know the masses of a few baryons, you can predict the masses of the others within 10-20 MeV (or get the 1S quark masses in fantastic agreement with the PDG's) using only basic algebra. Also, that GM-O article is just in a sad state. The actual formula developed by Okubo is not even shown; only the Gell-Mann results are there. Headbomb {talk / contribs / physics / books} 15:46, 25 August 2012 (UTC)

Discussion
I have started a discussion at Talk:1 Arietis in an attempt to address a dispute concerning your actions. RJH (talk) 18:12, 27 August 2012 (UTC)

Lie group elements
Reply to post on my talk page. Let's look first at $$U = \exp(i \sigma \cdot \theta) $$. So: notice these terms alternate between I and $$\sigma \cdot \theta$$. Regroup the terms, so you group together all those multiplying I and all those multiplying sigma dot theta. Compare to the expansions for sin x and cos x ... you should see something that looks like Euler's formula, but strangely different. The objects $$\exp(i \sigma \cdot \theta)$$ belong to the Lie group SU(2). You can verify that they are unitary, viz: $$U^\dagger U = 1$$. After that, it's trivial to verify that the product of two different U's is also unitary, so that they form a group.

This holds in general for any SU(N) (although the Euler-like formula is special, only for SU(2)). That is, for general SU(N), one has $$U=\exp(i F \cdot \theta) $$ for F the generators in the Lie algebra, i.e. where $$F \cdot \theta$$ is some arbitrary vector ($$\scriptstyle F \cdot \theta$$ is a vector?) in the Lie algebra. It's in fact a geodesic. See exponential map for details.
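A numerical sketch of this for N = 3, with a random traceless Hermitian matrix standing in for F·θ (any element of su(3) is such a matrix, up to the factor of i; the exponential is taken via the eigendecomposition):

```python
import numpy as np

rng = np.random.default_rng(0)

# A random traceless Hermitian 3x3 matrix stands in for F . theta in su(3).
H = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))
H = (H + H.conj().T) / 2
H = H - np.trace(H) / 3 * np.eye(3)

# exp(iH) via the eigendecomposition H = V diag(w) V^dagger.
w, V = np.linalg.eigh(H)
U = V @ np.diag(np.exp(1j * w)) @ V.conj().T

print(np.allclose(U.conj().T @ U, np.eye(3)))  # unitary
print(np.isclose(np.linalg.det(U), 1))         # det = exp(i tr H) = 1, since tr H = 0
```

Tracelessness of the generator is exactly what forces det U = 1, i.e. the "S" in SU(N).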

Finally, all SU(N) (all Lie groups) are smooth differentiable manifolds. You already know that U(1) is a circle, this is Euler's formula, again. I won't tell you what SU(2) is, cause that's the next homework. But I'll give you a huge hint... write $$U = a + i\vec b \cdot \sigma$$ for real number a and real-number-valued vector b, and compute $$a^2 + b_1^2 + b_2^2 + b_3^2$$  After this, we're almost done. I just want to show you one more trick, relating this to rotations in 3D space. After that, I've sort of exhausted my imagination. There are many many more things, but I can't say that any one is any more important than the next.

FWIW, if I change notation, calling vector b vector pi and renaming a as sigma, you'd get the sigma model, which is a model for the pion-nucleon interaction, with the pions being a local SU(2) isospin symmetry (i.e. the pions being the gauge particles of isospin symmetry). This model was developed before quarks, and yet is remarkably accurate for low energies. It's way cool in many ways: the nucleon turns out to be nothing more than a stable cloud of pions, known as the Skyrmion. Sadly, the article on sigma model is perfectly horrid; it gives no idea at all of what the model is about, or why. The non-linear sigma model is a little better, but it's too technical. The other interesting thing is that mathematicians have made some progress in showing that the low-energy limit of QCD is the Skyrmion... The only exact results are from Dan Friedan, and they are far more limited in nature than what I describe. It requires understanding cohomology and algebraic topology and etc. linas (talk) 00:02, 28 August 2012 (UTC)


 * Alright, well we have
 * $$\exp(i\vec\sigma\cdot\vec\theta) = \cos(\vert \theta \vert) \mathbb{I} + i \frac{\sin (\vert \theta  \vert)}{\vert \theta  \vert}(\vec\sigma\cdot\vec\theta) =  \cos(\vert \theta  \vert) \mathbb{I} + i~\mathrm{sinc} (\vert \theta \vert)(\vec\sigma\cdot\vec\theta) $$
 * So, letting
 * $$ a = \cos (\vert \theta \vert)$$
 * and
 * $$ \vec b = \mathrm{sinc} (\vert \theta \vert) \vec \theta $$
 * computing $$a^2 + b_1^2 + b_2^2 + b_3^2$$ yields
 * $$\left[ \cos (\vert \theta \vert) \right]^2 + \left[ \mathrm{sinc} (\vert \theta \vert) \right]^2  \vert \theta \vert^2 = \left[ \cos (\vert \theta  \vert) \right]^2 + \left[ \sin (\vert \theta \vert) \right]^2 = 1$$
 * and... ? Headbomb {talk / contribs / physics / books} 19:05, 28 August 2012 (UTC)
 * And $$a^2 + b_1^2 + b_2^2 + b_3^2 = 1$$ is the shape of what?
 * Some kind of 4-dimensional sphere of "radius" 1? Headbomb {talk / contribs / physics / books} 23:08, 28 August 2012 (UTC)

Yes. Not "some kind"; there is only one sphere for each dimension. It's 3-dimensional, not 4; the above eqn merely embeds it in 4-dimensional space. In symbols, it's $$S_3$$. One may show it's a sphere without having to embed it inside of 4D. To summarize, the group SU(2) is a manifold, and that manifold is a 3-sphere. Equivalently, SU(2) is a topological group: the group actions are continuous functions. (Differentiable, even!) The other SU(N)'s are not spheres, although they are homogeneous spaces and symmetric spaces: they look the same no matter which direction one goes in. (I believe they're both homogeneous and symmetric, but I may be stumbling over the correct technical definition.) linas (talk) 16:49, 29 August 2012 (UTC)
 * Oh, and one cool thing about the 3-sphere is that it is a Hopf bundle. Kind of freaky. The 11-sphere is too, and in particular, $$S_{11}=S_7\oplus S_4$$, and through some hand-waving, one says that S_4 is like our 3+1-dimensional space-time. This is why the early supergravity theories, specifically Kaluza-Klein theory, were 11-dimensional. (The original Kaluza-Klein, worked on by Einstein, tried to unify 4D spacetime with 1D electromagnetism.) linas (talk) 16:58, 29 August 2012 (UTC)
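The SU(2) = 3-sphere identification can be checked numerically: write any group element as U = a I + i b·σ, read off a and b from traces (tr σ_k = 0 and tr σ_j σ_k = 2δ_jk), and confirm a² + |b|² = 1. A numpy sketch, using the closed form of exp(iσ·θ) from the earlier homework:

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)
paulis = [sx, sy, sz]

rng = np.random.default_rng(1)
radii = []
for _ in range(5):
    theta = rng.normal(size=3)
    t = np.linalg.norm(theta)
    n = theta / t
    # U = exp(i sigma . theta) in closed form: cos|theta| I + i sin|theta| (n . sigma)
    U = np.cos(t) * np.eye(2) + 1j * np.sin(t) * sum(ni * s for ni, s in zip(n, paulis))
    # Read off a and b from U = a I + i b . sigma via traces.
    a = np.trace(U).real / 2
    b = np.array([np.trace(s @ U).imag / 2 for s in paulis])
    radii.append(a**2 + b @ b)

print(np.round(radii, 10))  # all 1: every group element sits on the unit 3-sphere
```

Five random group elements, five points on the unit 3-sphere; nothing about the check depends on the particular θ vectors drawn.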

James Arthur at British Journal of Educational Studies
Hi, he doesn't seem to be any of the persons in the dab page, see http://www.birmingham.ac.uk/schools/education/staff/profile.aspx?ReferenceId=5014. The best thing would probably be to give the name a dab, but I'm not sure what ("educationalist" sounds a bit weird to me). --Guillaume2303 (talk) 15:11, 29 August 2012 (UTC)
 * That's pretty much my feeling. I put the dn there because I thought you might come up with something, and I was debating "educationalist" myself, but also held back because it sounded weird. Headbomb {talk / contribs / physics / books} 15:12, 29 August 2012 (UTC)
 * Education researcher maybe? Headbomb {talk / contribs / physics / books} 15:13, 29 August 2012 (UTC)
 * Perfect. --Guillaume2303 (talk) 15:34, 29 August 2012 (UTC)

A barnstar for you!

 * Many thanks! Headbomb {talk / contribs / physics / books} 22:41, 29 August 2012 (UTC)