Wikipedia:Reference desk/Archives/Mathematics/2011 November 25

= November 25 =

Determinant Proof
I've seen in some places the determinant defined solely by the following two properties: det(AB)=det(A)det(B) and det(I)=1. Here, we are assuming A and B to be compatible and I to be the identity matrix. I like this definition because it intuitively relates how the determinant describes the area/volume ratios of a linear transformation. Now, one property of determinants that I am having trouble proving via this definition is the following: the determinant behaves as a linear function of a given row if the other rows stay the same. In other words, prove that $$\begin{vmatrix}ta & tb \\ c & d\end{vmatrix}=t\begin{vmatrix}a & b \\ c & d\end{vmatrix} \text{ where } a,b,c,d,t \in \mathbb R$$ and $$\begin{vmatrix}a+a' & b+b' \\ c & d\end{vmatrix}=\begin{vmatrix}a & b \\ c & d\end{vmatrix}+\begin{vmatrix}a' & b' \\ c & d\end{vmatrix} \text{ where } a,a',b,b',c,d \in \mathbb R$$. Any pointers? Thank you! — Trevor K. — 00:48, 25 November 2011 (UTC)  — Preceding unsigned comment added by Yakeyglee (talk • contribs)
 * You'll need more than these two properties to characterize the determinant.  Sławomir Biały  (talk) 01:26, 25 November 2011 (UTC)
 * What if we said also that exchanging the rows of a matrix flips the sign of the determinant? — Trevor K. —  02:34, 25 November 2011 (UTC)  — Preceding unsigned comment added by Yakeyglee (talk • contribs)
 * Also, this source cites Charles Cullen’s Matrices and Linear Transformations to define the determinant in this manner. I'd ideally like to be able to prove other common properties of the determinant using just those two criteria, but if it becomes necessary to use that third one I suggested, then that's fine as well.  — Trevor K. —  02:37, 25 November 2011 (UTC)  — Preceding unsigned comment added by Yakeyglee (talk • contribs)
 * I don't think the definition works, since for any k, if you define $$\det'(A)=(\det(A))^k$$ you still get $$\det'(AB)=\det'(A)\det'(B)$$ and $$\det'(I)=1$$. In particular, this works for k=0, so $$\det'(A)=1$$ for all A satisfies the conditions. This type of definition wouldn't really save anything anyway: you would still have to prove existence and uniqueness, and proving those is as much work as proving the properties of a more traditional definition.--RDBury (talk) 04:57, 25 November 2011 (UTC)
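 * The counterexample above is easy to check numerically. The following sketch (not part of the original thread; it uses NumPy and hypothetical test matrices) verifies that $$\det'(A)=(\det(A))^k$$ satisfies both defining properties for several values of k:

```python
import numpy as np

def det_prime(A, k):
    """det'(A) = det(A)**k also satisfies det'(AB) = det'(A)det'(B) and det'(I) = 1."""
    return np.linalg.det(A) ** k

# Arbitrary illustrative matrices (not from the discussion).
A = np.array([[1.0, 2.0], [3.0, 4.0]])
B = np.array([[0.0, 1.0], [5.0, 2.0]])

for k in (0, 2, 3):
    # Multiplicativity: det'(AB) = det'(A) det'(B)
    assert np.isclose(det_prime(A @ B, k), det_prime(A, k) * det_prime(B, k))
    # Normalization: det'(I) = 1
    assert np.isclose(det_prime(np.eye(2), k), 1.0)
```

For k=0 this gives the constant function det'(A)=1, so the two properties alone cannot single out the determinant.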
 * Actually you misquoted the source you gave. With the definition given there the first property you gave (multiplying by a scalar) is trivial, the second one (additivity) not so much and you probably have to build up some machinery in terms of the determinants of elementary matrices.--RDBury (talk) 05:11, 25 November 2011 (UTC)
 * Ooh, you are right, I did miscite it. Good catch!  And I understand your argument from the other comment above.  I don't understand how the scalar multiplication through a row is so trivial based on the new properties... could you elaborate on that?  Perhaps I'm just overlooking something obvious.  — Trevor K. —  08:42, 25 November 2011 (UTC)  — Preceding unsigned comment added by Yakeyglee (talk • contribs)
 * Try rewriting the first matrix: $$\left(\begin{matrix} ta & tb\\ c & d\end{matrix}\right) = \left(\begin{matrix} t & 0 \\ 0 & 1 \end{matrix}\right)\left(\begin{matrix}a & b \\ c& d\end{matrix}\right)$$. 129.234.53.239 (talk) 12:12, 26 November 2011 (UTC)
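 * The factorization above, together with multiplicativity, gives the row-scaling property directly. A quick numerical check (not part of the original thread; NumPy, with arbitrary sample values):

```python
import numpy as np

# Arbitrary sample entries and scalar (illustration only).
a, b, c, d, t = 1.0, 2.0, 3.0, 4.0, 5.0
M = np.array([[a, b], [c, d]])
E = np.array([[t, 0.0], [0.0, 1.0]])   # elementary matrix scaling the first row by t

# The row-scaled matrix is exactly the product E @ M ...
assert np.allclose(E @ M, [[t * a, t * b], [c, d]])
# ... so det(AB) = det(A)det(B) gives det(E M) = det(E) det(M) = t det(M).
assert np.isclose(np.linalg.det(E @ M), t * np.linalg.det(M))
```

The remaining step in a full proof is showing det(E) = t for such an elementary matrix, which is where the extra machinery about elementary matrices comes in.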

Definition of '+'
An n-ary relation $$ R $$ is a set of n-tuples. So a binary relation $$+$$ may be defined this way: $$+ = \{u:u=(x,y) \,\land \, (x,y) \in +\}$$. But this definition still has a $$+$$ inside... So how can I define it more properly? TUOYUTSENG (talk) 04:37, 25 November 2011 (UTC)
 * If you simply want to state that + is a binary relation on $$\mathbb{R}$$, you can write $$+\subseteq\mathbb{R}\times\mathbb{R}$$. Bomazi (talk) 10:00, 25 November 2011 (UTC)
 * + is a binary function, not a relation, so it consists of all ordered triples (a,b,c) such that a+b=c; for example, it contains the triples (1,2,3) and (10,2,12), and $$+\subseteq\mathbb{R}\times\mathbb{R}\times\mathbb{R}$$. Money is tight (talk) 10:27, 25 November 2011 (UTC)
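 * This set-of-triples view can be made concrete. A sketch (not from the original thread) restricted to the finite carrier {0,...,9}, since the full graph of + over $$\mathbb{R}$$ is infinite:

```python
from collections import Counter

# The graph of + restricted to {0,...,9}: all triples (a, b, c) with a + b = c.
N = range(10)
plus = {(a, b, a + b) for a in N for b in N}

assert (1, 2, 3) in plus
assert (10, 2, 12) not in plus   # 10 lies outside the finite carrier used here

# Functionality: each input pair (a, b) determines exactly one output c,
# which is what distinguishes a function from a general ternary relation.
counts = Counter((a, b) for (a, b, c) in plus)
assert all(v == 1 for v in counts.values())
```

The definition is not circular here because the set is built from the primitive integer addition, not from the relation being defined.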

Thank you. TUOYUTSENG (talk) 11:02, 25 November 2011 (UTC)

Biconditional notation
Is there a distinction between the logic statements $$A \leftrightarrow B$$ and $$A \Leftrightarrow B$$ or are they just two variations on the same notion? I've seen both used before, and I'm just wondering whether there is a subtle distinction. — Trevor K. — 21:34, 25 November 2011 (UTC)  — Preceding unsigned comment added by Yakeyglee (talk • contribs)


 * List of mathematical symbols shows both of them as representing the biconditional.
 * —Wavelength (talk) 21:46, 25 November 2011 (UTC)


 * It appears that the article “Logical biconditional” uses the form with two horizontal bars to represent a meta-relation between two other relations, each of which might have a relation represented by the form with one horizontal bar.
 * —Wavelength (talk) 21:53, 25 November 2011 (UTC)


 * Apparently $$A \leftrightarrow B$$ $$\Leftrightarrow$$ $$C \leftrightarrow D$$ means that "A iff B" iff "C iff D".
 * —Wavelength (talk) 22:00, 25 November 2011 (UTC)
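 * The object-level biconditional in that reading is just agreement of truth values, which can be illustrated in code (an editorial sketch, not part of the original thread; the function name `iff` is ours):

```python
def iff(p, q):
    """Object-level biconditional: true exactly when p and q have the same truth value."""
    return p == q

# "A iff B" iff "C iff D": the outer biconditional compares the two inner ones.
# Illustrative truth values (not from the discussion).
A, B, C, D = True, False, False, True
assert iff(iff(A, B), iff(C, D))   # both inner biconditionals are False, so they agree
```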


 * The article "If and only if" (permanent link here) says, under "Notation":
 * "The corresponding logical symbols are '↔', '⇔' and '≡', and sometimes 'iff'. These are usually treated as equivalent. However, some texts of mathematical logic (particularly those on first-order logic, rather than propositional logic) make a distinction between these, in which the first, ↔, is used as a symbol in logic formulas, while ⇔ is used in reasoning about those logic formulas (e.g., in metalogic). In Łukasiewicz's notation, it is the prefix symbol 'E'."
 * —Wavelength (talk) 23:50, 25 November 2011 (UTC)


 * I'm not an expert here, but we were always taught the distinction at undergrad level, and I thought it was more or less general in mathematics courses. That is, the little one "↔" is just any statement, like "I'll hang out with you if and only if you give me candy," and the big one, "⇔", implies a tautology, e.g. "the previous statement I made is true if and only if it always holds," which sounds like nonsense, but that is true of all tautologies. Another example is "1 + 1 = 2 ⇔ 2 + 2 = 4." IBE (talk) 14:29, 26 November 2011 (UTC)


 * ... not general in mathematics courses -- I've never heard the "tautology" interpretation.   D b f i r s   07:37, 27 November 2011 (UTC)
 * There are different conventions used. In Epp's Discrete Mathematics textbook, p ↔ q is used to mean "if and only if" for logical statements p and q. The ⇔ is used for predicates P(x) and Q(x) like so: P ⇔ Q is shorthand for ∀x P(x) ↔ ∀x Q(x). I've not seen this convention used anywhere else, though I guess it's similar to the tautology convention above. Staecker (talk) 13:11, 27 November 2011 (UTC)


 * Thanks for the clarification, confirming once again that the sum total of all human knowledge is greater than the limited undergrad experience of one science major. Wavelength's article snippet had it right to begin with. IBE (talk) 13:33, 27 November 2011 (UTC)