User:Foxjwill/Math stuff

Total derivative

 * $$\frac{\mathrm df}{\mathrm dx}= \frac{\partial f}{\partial x}+\sum_{j=1}^k \frac{\partial y_j}{\partial x}\frac{\partial f}{\partial y_j},$$

where $$f$$ is a function of $$x$$ and of $$k$$ variables $$y_1, \ldots, y_k$$, each of which depends on $$x$$.

Example 1

 * $$ \frac{\mathrm df}{\mathrm dx}=\frac{\partial f}{\partial x} + \frac{\partial y}{\partial x} \frac{\partial f}{\partial y}, $$

where $$f: \mathbb{R} \times \mathbb{R} \to \mathbb{R}$$ and $$y$$ is a function of $$x$$.

Example 2
Given $$f(x,y) = x^3 - y^2\,,$$ where $$y$$ is related to $$x$$ by $$y^2 = x^3$$, find $$\textstyle \frac{\mathrm df}{\mathrm dx}.$$

Begin with the definition of the total derivative: $$ {\textstyle \frac{\mathrm df}{\mathrm dx}=\frac{\partial f}{\partial x} + \frac{\partial y}{\partial x} \frac{\partial f}{\partial y} }$$. Notice that in order to continue, we need to calculate $$\textstyle \frac{\partial f}{\partial x},\, \textstyle \frac{\partial y}{\partial x},\, $$ and $$\textstyle \frac{\partial f}{\partial y}.$$



$$ \begin{align} \frac{\partial f}{\partial x} &= \frac{\partial}{\partial x}\left( x^3 - y^2 \right)\\ &= 3x^2 \end{align} $$



$$ \begin{align} \frac{\partial f}{\partial y} &= \frac{\partial}{\partial y}\left( x^3 - y^2 \right)\\ &= -2y \end{align} $$



Differentiating the relation $$y^2 = x^3$$ implicitly with respect to $$x$$:

$$ \begin{align} y^2                            &= x^3\\ 2y\frac{\partial y}{\partial x} &= 3x^2\\ \frac{\partial y}{\partial x} &= \frac{3x^2}{2y} \end{align} $$

Plugging the results into the definition, $$ {\textstyle \frac{\mathrm df}{\mathrm dx} = 3x^2 + \frac{3x^2}{2y} \left( -2y \right) = 3x^2 - 3x^2}$$, we find that $$ {\textstyle \frac{\mathrm df}{\mathrm dx} = 0}.$$
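The result can be sanity-checked numerically (a minimal sketch; the helper names are my own): along the curve $$y^2 = x^3$$ we may take the branch $$y = x^{3/2}$$ for $$x > 0$$, so $$f(x, y(x))$$ vanishes identically and its derivative is 0.

```python
# Numeric check that df/dx = 0 along y^2 = x^3 (helper names are illustrative).

def f(x, y):
    return x**3 - y**2

def y_of_x(x):
    return x**1.5  # one branch of y^2 = x^3, valid for x > 0

def total_derivative(x, h=1e-6):
    """Central finite difference of x -> f(x, y(x))."""
    return (f(x + h, y_of_x(x + h)) - f(x - h, y_of_x(x - h))) / (2 * h)

print(total_derivative(2.0))  # approximately 0
```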

Continued fractions


$$ L = 0 + \cfrac{1}{ 0 + \cfrac{1}{ 0 + \cfrac{1}{ 0 + \cfrac{1}{\ddots \,} } } } $$



$$ \frac{1}{L} = \cfrac{1}{ 0 + \cfrac{1}{ 0 + \cfrac{1}{ 0 + \cfrac{1}{ 0 + \cfrac{1}{\ddots \,} }   }  } } = L $$



$$ \begin{align} L^2 &= 1\\ L  &= \pm \sqrt{1} \end{align} $$

Because $$L$$ can't be negative, $$L = 1$$.

$$ \therefore\ 0 + \cfrac{1}{ 0 + \cfrac{1}{ 0 + \cfrac{1}{ 0 + \cfrac{1}{\ddots \,} } } } = 1 $$

Tetration and beyond
 * $$\begin{array}{lcll} a\cdot b            &=& a + a\cdot (b-1)             &,\ a\cdot 0 = 0\\ a^b                 &=& a\cdot a^{b-1}               &,\ a^0 = 1\\ a\uparrow\uparrow b &=& a^{a\uparrow\uparrow (b-1)}  &,\ a\uparrow\uparrow 0 = 1 \end{array}$$
 * $$\forall a,b \in \mathbb{N}$$
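The tetration recursion above translates directly into code (a minimal sketch; the name `tetra` is my own):

```python
def tetra(a, b):
    """Tetration a↑↑b via the recursion a↑↑b = a**(a↑↑(b-1)), with a↑↑0 = 1."""
    if b == 0:
        return 1
    return a ** tetra(a, b - 1)

print(tetra(2, 4))  # 2**(2**(2**2)) = 2**16 = 65536
```

The same pattern gives multiplication and exponentiation by swapping the base case and the combining operation.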

Polynomials and their derivatives
The derivative of a polynomial,
 * $$\frac{d}{dx}: \mathbf{P}^n \to \mathbf{P}^n$$,

can be defined as
 * $$\frac{d}{dx}(a_0 + a_1 x + a_2 x^2 + \cdots + a_n x^n) = a_1 + 2a_2 x + 3a_3 x^2 + \cdots + n a_n x^{n-1}$$.

If we use the standard ordered basis
 * $$\mathbf{e} = \{1,x,x^2,\ldots,x^n\}$$,

then
 * $$(a_0 + a_1 x + a_2 x^2 + \cdots + a_n x^n)_\mathbf{e}$$

can be written as
 * $$\begin{pmatrix} a_0\\ a_1\\ a_2\\ \vdots\\ a_n \end{pmatrix}$$, and $$\frac{d}{dx}$$ as
 * $$\frac{d}{dx}: \begin{pmatrix} a_0\\ a_1\\ a_2\\ \vdots\\ a_n \end{pmatrix} \mapsto \begin{pmatrix} a_1\\ 2a_2\\ \vdots\\ n a_n\\ 0 \end{pmatrix}$$.

Since the matrix
 * $$A=\begin{pmatrix} 0      & 1      & 0      & \cdots & 0\\ 0      & 0      & 2      & \cdots & 0\\ \vdots & \vdots & \vdots & \ddots & \vdots\\ 0      & 0      & 0      & \cdots & n\\ 0      & 0      & 0      & \cdots & 0\\ \end{pmatrix}$$ satisfies
 * $$A \begin{pmatrix} a_0\\ a_1\\ a_2\\ \vdots\\ a_n \end{pmatrix} = \begin{pmatrix} a_1\\ 2a_2\\ \vdots\\ n a_n\\ 0 \end{pmatrix}$$, $$A$$ represents $$\frac{d}{dx}$$.
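The matrix picture can be sketched numerically (a minimal NumPy sketch; since $$\frac{d}{dx}x^k = kx^{k-1}$$, the superdiagonal carries the factors $$1, 2, \ldots, n$$):

```python
import numpy as np

n = 3  # polynomials up to degree 3
# Matrix of d/dx in the basis {1, x, x^2, x^3}: entry A[k, k+1] = k + 1
A = np.diag(np.arange(1, n + 1), k=1).astype(float)

# p(x) = 1 + 2x + 3x^2 + 4x^3, so p'(x) = 2 + 6x + 12x^2
coeffs = np.array([1.0, 2.0, 3.0, 4.0])
print(A @ coeffs)  # [ 2.  6. 12.  0.]
```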

Wedge product
$$ \mathbf{u} = u_1 \mathbf{e_1} + u_2 \mathbf{e_2} + u_3 \mathbf{e_3} $$

$$ \mathbf{v} = v_1 \mathbf{e_1} + v_2 \mathbf{e_2} + v_3 \mathbf{e_3} $$

Expanding and using $$\mathbf{e_i}\wedge\mathbf{e_i} = 0$$ and $$\mathbf{e_j}\wedge\mathbf{e_i} = -\mathbf{e_i}\wedge\mathbf{e_j}$$:

$$ \mathbf{u}\wedge\mathbf{v} = (u_1 v_2 - u_2 v_1)\,\mathbf{e_1}\wedge\mathbf{e_2} + (u_2 v_3 - u_3 v_2)\,\mathbf{e_2}\wedge\mathbf{e_3} + (u_3 v_1 - u_1 v_3)\,\mathbf{e_3}\wedge\mathbf{e_1} $$
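The coefficient computation can be sketched in code (the `wedge` helper is my own name; it returns the components on the basis $$\mathbf{e_1}\wedge\mathbf{e_2},\ \mathbf{e_2}\wedge\mathbf{e_3},\ \mathbf{e_3}\wedge\mathbf{e_1}$$):

```python
def wedge(u, v):
    """Coefficients of u∧v on the bivector basis (e1∧e2, e2∧e3, e3∧e1)."""
    u1, u2, u3 = u
    v1, v2, v3 = v
    return (u1 * v2 - u2 * v1,
            u2 * v3 - u3 * v2,
            u3 * v1 - u1 * v3)

print(wedge((1, 0, 0), (0, 1, 0)))  # (1, 0, 0), i.e. e1∧e2
```

Note the antisymmetry: `wedge(v, u)` negates every component of `wedge(u, v)`.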

General second order linear ordinary differential equation
A second order linear ordinary differential equation is given by


 * $$y'' + a(x) y' + b(x) y = c(x).\,$$

One way to solve this is to look for some integrating factor, $$M$$, such that


 * $$My'' + Ma(x)y' + Mb(x)y = (My)'' = Mc(x).\,$$

Expanding $$(My)''$$ and setting it equal to the left-hand side,

$$My'' + Ma(x)y' + Mb(x)y = My'' + 2M'y' + M''y\,,$$

so matching the coefficients of $$y'$$ and $$y$$ gives

$$ \left \{ \begin{align} 2M' &= M a(x)\\ M'' &= M b(x)\\ \end{align} \right. $$

Dividing the second equation by the first eliminates $$M$$:

$$ M'' a(x) = 2 M' b(x) $$

$$ u = M' $$

$$ \frac{du}{dx} a(x) = 2u b(x) $$

$$ \frac{1}{2}{\int \frac{du}{u} } = {\int \frac{b(x)}{a(x)} dx} $$

$$ \frac{1}{2} \ln u = {\int \frac{b(x)}{a(x)} dx} $$

$$ u = e^{2{\int \frac{b(x)}{a(x)} dx}} $$

$$ M' = e^{2{\int \frac{b(x)}{a(x)} dx}} $$

$$ M = {\int e^{2{\int \frac{b(x)}{a(x)} dx}} dx} $$

$$ \begin{align} (My)'' &= Mc(x)\\ (My)' &= {\int Mc(x) dx} + C_1\\ My    &={\int {\int Mc(x) dx}dx} + C_1 x + C_2\\ \end{align} $$

$$ y = \frac{{\int {\int Mc(x)\, dx}\, dx} + C_1 x + C_2}{M}, \quad \text{where } M = {\int e^{2{\int \frac{b(x)}{a(x)} dx}} dx}. $$
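The key identity, that multiplying by $$M$$ turns the left side into $$(My)''$$, can be spot-checked numerically. The concrete choice below is my own illustration, not from the original: taking $$M = x^2$$ forces $$a(x) = 2M'/M = 4/x$$ and $$b(x) = M''/M = 2/x^2$$, and any test function $$y$$ should then satisfy $$(My)'' = M(y'' + a y' + b y)$$.

```python
import math

# Hand-picked illustration: M(x) = x^2, so a(x) = 4/x and b(x) = 2/x^2.
# Verify (My)'' == M * (y'' + a y' + b y) numerically with y = sin.

def second_derivative(g, x, h=1e-4):
    """Central-difference approximation of g''(x)."""
    return (g(x + h) - 2 * g(x) + g(x - h)) / h**2

def My(x):
    return x**2 * math.sin(x)  # M(x) * y(x)

x = 1.3
lhs = second_derivative(My, x)      # (My)''
rhs = x**2 * (-math.sin(x)          # M * (y'' + a(x) y' + b(x) y)
              + (4 / x) * math.cos(x)
              + (2 / x**2) * math.sin(x))
print(abs(lhs - rhs))  # tiny: limited only by finite-difference error
```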

Differential example
The key to differentials is to think of $$x$$ as the function sending each real number $$p$$ to itself, and $$dx$$ as a function sending that same real number $$p$$ to a linear map from $$\mathbb{R}$$ to $$\mathbb{R}$$. Since every linear map from $$\mathbb{R}$$ to $$\mathbb{R}$$ can be written as a $$1\times 1$$ matrix, we can define $$dx$$ as



$$ dx: \mathbb{R} \rightarrow \mathbb{R}^{1\times 1} $$



$$ dx: p \mapsto [1] $$

(As a side note, the value of $$dx$$, and similarly for all differentials, at $$p$$ is usually written $$dx_p$$.)

As a concrete example, take the function $$f(x) = x^2$$. Differentiating, we have


 * $$\frac{df}{dx} = 2x.$$

Since we defined $$dx_p$$ as $$[1]$$ and $$x(p)$$ as $$p$$, we can rewrite the derivative as


 * $$df_p [1]^{-1} = 2x(p).\,$$

Multiplying both sides by $$[1]$$, we have


 * $$df_p = 2x(p) [1].\,$$

And voilà! We can say that for any differentiable function $$f: \mathbb{R} \rightarrow \mathbb{R}$$,


 * $$df_p = \left[f'(x(p))\right] = f'(x(p))dx_p $$
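The $$1\times 1$$-matrix picture can be sketched in code (a minimal sketch; the names are my own illustration, with $$f(x) = x^2$$ as above):

```python
# x(p) = p, dx_p = [1] as a 1x1 matrix, and df_p = f'(x(p)) * dx_p.

def dx(p):
    return [[1.0]]  # the differential of the identity, as a 1x1 matrix

def df(p):
    fprime = 2 * p  # f'(x) = 2x with f(x) = x^2 and x(p) = p
    return [[fprime * dx(p)[0][0]]]

print(df(3.0))  # [[6.0]]
```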