User:Tsukitakemochi/sandbox

Proof
Consider a solid body subjected to a pair of external force systems, referred to as $$F^P_i$$ and $$F^Q_i$$. Consider that each force system causes a displacement field, with the displacements measured at the external forces' points of application referred to as $$d^P_i$$ and $$d^Q_i$$.

When the $$F^P_i$$ force system is applied to the structure, the balance between the work performed by the external force system and the strain energy is:



$$\frac{1}{2}\sum^n_{i=1}F^P_id^P_i = \frac{1}{2}\int_\Omega \sigma^P_{ij}\epsilon^P_{ij}\,d\Omega $$

The work-energy balance associated with the $$F^Q_i$$ force system is as follows:



$$\frac{1}{2}\sum^n_{i=1}F^Q_id^Q_i = \frac{1}{2}\int_\Omega \sigma^Q_{ij}\epsilon^Q_{ij}\,d\Omega $$

Now, consider that, with the $$F^P_i$$ force system already applied, the $$F^Q_i$$ force system is applied subsequently. Since $$F^P_i$$ is already fully applied, it remains constant while the additional displacements $$d^Q_i$$ develop, and therefore performs the full work $$\sum F^P_id^Q_i$$ (without the factor $$\frac{1}{2}$$). The work-energy balance then assumes the following expression:



$$\frac{1}{2}\sum^n_{i=1}F^P_id^P_i + \frac{1}{2}\sum^n_{i=1}F^Q_id^Q_i + \sum^n_{i=1}F^P_id^Q_i = \frac{1}{2}\int_\Omega \sigma^P_{ij}\epsilon^P_{ij}\,d\Omega + \frac{1}{2} \int_\Omega \sigma^Q_{ij}\epsilon^Q_{ij}\,d\Omega + \int_\Omega \sigma^P_{ij}\epsilon^Q_{ij}\,d\Omega $$

Conversely, if the $$F^Q_i$$ force system is applied first and the $$F^P_i$$ force system subsequently, the work-energy balance assumes the following expression:



$$\frac{1}{2}\sum^n_{i=1}F^Q_id^Q_i + \frac{1}{2}\sum^n_{i=1}F^P_id^P_i + \sum^n_{i=1}F^Q_id^P_i = \frac{1}{2}\int_\Omega \sigma^Q_{ij}\epsilon^Q_{ij}\,d\Omega + \frac{1}{2}\int_\Omega \sigma^P_{ij}\epsilon^P_{ij}\,d\Omega + \int_\Omega \sigma^Q_{ij}\epsilon^P_{ij}\,d\Omega $$

If the work-energy balances for the cases where the force systems are applied in isolation are subtracted from the corresponding sequential cases, we arrive at the following equations:



$$\sum^n_{i=1}F^P_id^Q_i = \int_\Omega \sigma^P_{ij}\epsilon^Q_{ij}\,d\Omega $$



$$\sum^n_{i=1}F^Q_id^P_i = \int_\Omega \sigma^Q_{ij}\epsilon^P_{ij}\,d\Omega $$

If the solid body on which the force systems act is made of a linear elastic material, and if the force systems produce only infinitesimal strains in the body, then the body's constitutive equation follows the generalized Hooke's law and can be expressed in the following manner:



$$\sigma_{ij}=D_{ijkl}\epsilon_{kl} $$

Substituting this constitutive relation into the previous pair of equations leads to the following:



$$\sum^n_{i=1}F^P_id^Q_i = \int_\Omega D_{ijkl}\epsilon^P_{ij}\epsilon^Q_{kl}\,d\Omega $$



$$\sum^n_{i=1}F^Q_id^P_i = \int_\Omega D_{ijkl}\epsilon^Q_{ij}\epsilon^P_{kl}\,d\Omega $$

Since the stiffness tensor possesses the major symmetry $$D_{ijkl}=D_{klij}$$, the integrands on the right-hand sides of the two equations are equal, and subtracting one equation from the other yields the following result:



$$\sum^n_{i=1}F^P_id^Q_i = \sum^n_{i=1}F^Q_id^P_i $$
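The reciprocity result above can be checked numerically. The sketch below (the two-spring stiffness matrix and the load values are illustrative assumptions, not taken from the text) solves a 2-DOF linear elastic system under two load systems and verifies that $$\sum F^P_id^Q_i = \sum F^Q_id^P_i$$:

```python
# Numerical check of the reciprocal work theorem on a 2-DOF linear
# elastic system (two springs in series). The stiffness values and load
# systems below are illustrative assumptions, not taken from the text.

def solve_2x2(K, F):
    """Solve K d = F for a 2x2 stiffness matrix by Cramer's rule."""
    (a, b), (c, d) = K
    det = a * d - b * c
    return [(d * F[0] - b * F[1]) / det, (-c * F[0] + a * F[1]) / det]

k1, k2 = 3.0, 5.0
K = [[k1 + k2, -k2],       # symmetric stiffness matrix of the spring chain
     [-k2, k2]]

FP = [10.0, 0.0]           # force system P
FQ = [0.0, 7.0]            # force system Q

dP = solve_2x2(K, FP)      # displacements d^P under P
dQ = solve_2x2(K, FQ)      # displacements d^Q under Q

work_PQ = sum(f * d for f, d in zip(FP, dQ))   # sum_i F^P_i d^Q_i
work_QP = sum(f * d for f, d in zip(FQ, dP))   # sum_i F^Q_i d^P_i

assert abs(work_PQ - work_QP) < 1e-9           # the two works agree
```

The equality holds because the stiffness matrix of a linear elastic system is symmetric, which is the discrete counterpart of the major symmetry of $$D_{ijkl}$$ used above.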

Proof (incremental loading)
Consider a solid body subjected to a pair of external force systems, referred to as $$F^P_i$$ and $$F^Q_i$$. Consider that each force system causes a displacement field, with the displacements measured at the external forces' points of application referred to as $$d^P_i$$ and $$d^Q_i$$.

When the $$F^P_i$$ force system is applied to the structure, the balance between the work performed by the external force system and the strain energy is:



$$\frac{1}{2}\sum^n_{i=1}F^P_id^P_i = \frac{1}{2}\int_\Omega \epsilon^P_{ij}\sigma^P_{ij}\,d\Omega $$

The work-energy balance associated with the $$F^Q_i$$ force system is as follows:



$$\frac{1}{2}\sum^n_{i=1}F^Q_id^Q_i = \frac{1}{2}\int_\Omega \epsilon^Q_{ij}\sigma^Q_{ij}\,d\Omega $$

Now, consider that, with the $$F^P_i$$ force system already applied, a very small $$F^Q_i$$ force system is applied subsequently. Since $$F^P_i$$ is already fully applied, it remains constant while the small additional displacements $$d^Q_i$$ develop, and performs the full work $$\sum F^P_id^Q_i$$; the terms quadratic in the small $$F^Q_i$$ system are neglected. The work-energy balance then assumes the following expression:



$$\frac{1}{2}\sum^n_{i=1}F^P_id^P_i + \sum^n_{i=1}F^P_id^Q_i = \frac{1}{2} \int_\Omega \epsilon^P_{ij}\sigma^P_{ij}\,d\Omega + \int_\Omega \sigma^P_{ij}\epsilon^Q_{ij}\,d\Omega $$

Conversely, if the $$F^Q_i$$ force system is already applied and a very small $$F^P_i$$ force system is applied subsequently, the terms quadratic in the small $$F^P_i$$ system are neglected and the work-energy balance assumes the following expression:



$$\frac{1}{2}\sum^n_{i=1}F^Q_id^Q_i + \sum^n_{i=1}F^Q_id^P_i = \frac{1}{2}\int_\Omega \epsilon^Q_{ij}\sigma^Q_{ij}\,d\Omega + \int_\Omega \sigma^Q_{ij}\epsilon^P_{ij}\,d\Omega $$

If the work-energy balances for the cases where the force systems are applied in isolation are subtracted from the corresponding sequential cases, we arrive at the following equations:



$$\sum^n_{i=1}F^P_id^Q_i = \int_\Omega \sigma^P_{ij}\epsilon^Q_{ij}\,d\Omega $$



$$\sum^n_{i=1}F^Q_id^P_i = \int_\Omega \sigma^Q_{ij}\epsilon^P_{ij}\,d\Omega $$

If the solid body on which the force systems act is made of a linear elastic material, and if the force systems produce only infinitesimal strains in the body, then the body's constitutive equation follows the generalized Hooke's law and can be expressed in the following manner:



$$\sigma_{ij}=D_{ijkl}\epsilon_{kl} $$

Substituting this constitutive relation into the previous pair of equations leads to the following:



$$\sum^n_{i=1}F^P_id^Q_i = \int_\Omega D_{ijkl}\epsilon^P_{ij}\epsilon^Q_{kl}\,d\Omega $$



$$\sum^n_{i=1}F^Q_id^P_i = \int_\Omega D_{ijkl}\epsilon^Q_{ij}\epsilon^P_{kl}\,d\Omega $$

Since the stiffness tensor possesses the major symmetry $$D_{ijkl}=D_{klij}$$, the integrands on the right-hand sides of the two equations are equal, and subtracting one equation from the other yields the following result:



$$\sum^n_{i=1}F^P_id^Q_i = \sum^n_{i=1}F^Q_id^P_i $$

Position Vectors
Triangle centers can be written in the following form:


 * $$P=\frac{w_A A + w_B B + w_C C} {w_A + w_B + w_C}.$$

Here, $$P,A,B,C$$ are position vectors, and the weights $$w_A, w_B, w_C$$ are scalars whose definitions for each center are given in the following table, where $$a, b, c$$ are the side lengths and $$S$$ is the area of the triangle, which can be obtained from Heron's formula.


 * $$a\equiv\overline{BC}=\sqrt{(\vec{BC},\vec{BC})},$$
 * $$b\equiv\overline{CA}=\sqrt{(\vec{CA},\vec{CA})},$$
 * $$c\equiv\overline{AB}=\sqrt{(\vec{AB},\vec{AB})},$$
 * $$16S^2=(a^2 + b^2+c^2)^2-2(a^4 + b^4+c^4).$$
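As a quick check of the weighted-vertex formula and of the area identity above, the sketch below computes the incenter of a 3-4-5 right triangle using the well-known incenter weights $$w_A=a,\,w_B=b,\,w_C=c$$ (the specific triangle is an arbitrary example, not taken from the table referenced above):

```python
# Illustrative check of P = (wA*A + wB*B + wC*C)/(wA+wB+wC) on a 3-4-5
# right triangle; the incenter weights (a, b, c) are a standard choice.
from math import dist, sqrt

A, B, C = (0.0, 0.0), (4.0, 0.0), (0.0, 3.0)
a, b, c = dist(B, C), dist(C, A), dist(A, B)   # side lengths a=BC, b=CA, c=AB

wA, wB, wC = a, b, c                           # incenter weights
w = wA + wB + wC
P = tuple((wA * A[i] + wB * B[i] + wC * C[i]) / w for i in range(2))

# Area S from the identity 16 S^2 = (a^2+b^2+c^2)^2 - 2(a^4+b^4+c^4)
S = sqrt(((a * a + b * b + c * c) ** 2 - 2 * (a**4 + b**4 + c**4)) / 16)

assert abs(P[0] - 1.0) < 1e-9 and abs(P[1] - 1.0) < 1e-9  # incenter (1, 1)
assert abs(S - 6.0) < 1e-9                                 # area 6
```

The incenter lands at $$(1,1)$$, at distance 1 (the inradius) from both legs, and the identity reproduces the area $$S=6$$.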

Position Vectors of the Five Centers
The position vector $$P$$ of each of the five triangle centers (centroid, incenter, excenters, circumcenter, orthocenter) is expressed in terms of the vertex position vectors $$A,B,C$$ by the general formula


 * $$P=\frac{w_A A + w_B B + w_C C} {w_A + w_B + w_C}$$

where $$w_A,w_B,w_C$$ are the weights summarized in the following table, and $$S$$ is the area of the triangle, which can also be obtained from Heron's formula.


 * $$a\equiv\overline{BC}=\sqrt{(\vec{BC},\vec{BC})},$$
 * $$b\equiv\overline{CA}=\sqrt{(\vec{CA},\vec{CA})},$$
 * $$c\equiv\overline{AB}=\sqrt{(\vec{AB},\vec{AB})},$$
 * $$16S^2=(a^2 + b^2+c^2)^2-2(a^4 + b^4+c^4).$$

Cartesian Derivation
Triangle centers can be written in the following form:


 * $$P=\frac{w_A A + w_B B + w_C C} {w_A + w_B + w_C}.$$

Here, $$w_A, w_B, w_C$$ are weights whose definitions correspond to each center. Several examples for the major centers are given in the following table.


 * $$a\equiv\overline{BC}=\sqrt{(\vec{BC},\vec{BC})},$$
 * $$b\equiv\overline{CA}=\sqrt{(\vec{CA},\vec{CA})},$$
 * $$c\equiv\overline{AB}=\sqrt{(\vec{AB},\vec{AB})},$$
 * $$i_A\equiv(\vec{AB},\vec{AC}),$$
 * $$i_B\equiv(\vec{BC},\vec{BA}),$$
 * $$i_C\equiv(\vec{CA},\vec{CB}).$$

This derivation is most convenient when a triangle is defined by its three vertices. Since a triangle can also be defined by three (infinite) lines, another approach exists; in such cases, the trilinear coordinates described in another section may provide a better means.
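As an illustration of the dot-product formulation, the sketch below computes the circumcenter with weights $$w_A = i_A(i_B+i_C)$$ and cyclic permutations (algebraically equivalent to the classical barycentric weights $$a^2(b^2+c^2-a^2)$$, since $$a^2=i_B+i_C$$ and $$b^2+c^2-a^2=2i_A$$); the sample triangle is an arbitrary choice:

```python
# Circumcenter from the dot products i_A, i_B, i_C defined in the text,
# with weights wA = iA*(iB+iC) etc.; the triangle is an arbitrary example.
from math import dist

A, B, C = (0.0, 0.0), (4.0, 0.0), (1.0, 3.0)

def dot(u, v): return u[0] * v[0] + u[1] * v[1]
def sub(u, v): return (u[0] - v[0], u[1] - v[1])

iA = dot(sub(B, A), sub(C, A))   # (AB, AC)
iB = dot(sub(C, B), sub(A, B))   # (BC, BA)
iC = dot(sub(A, C), sub(B, C))   # (CA, CB)

wA, wB, wC = iA * (iB + iC), iB * (iC + iA), iC * (iA + iB)
w = wA + wB + wC
P = tuple((wA * A[i] + wB * B[i] + wC * C[i]) / w for i in range(2))

# The circumcenter is equidistant from all three vertices.
assert abs(dist(P, A) - dist(P, B)) < 1e-9
assert abs(dist(P, A) - dist(P, C)) < 1e-9
```

For this triangle the formula gives $$P=(2,1)$$, at distance $$\sqrt{5}$$ from each vertex.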

= Polynomial =

Definitions
Monomial: Although the term monomial carries a slight ambiguity (see that section), here it is defined as a product of a nonzero constant, called the coefficient, and zero or more variables. When the coefficient is $$1$$ and at least one variable is present, the $$1$$ may be omitted. Repeated variables are allowed, so $$-\pi r^2$$ and $$(3-4i)x^4yz^{13}$$ are monomials ($$-\pi$$ and $$3-4i$$ are the coefficients, and $$r$$, $$x$$, $$y$$, $$z$$ are the variables).

Polynomial: A polynomial is a sum of one or more monomials; in particular, a single monomial is itself a polynomial. $$P$$ in the following form is a polynomial, where every $$m_i$$ is a monomial,
 * $$P = \sum_{i=1}^n m_i.$$

Domain of a polynomial: In the definition of a polynomial above, if every $$m_i$$ has its coefficient in a given domain (complex, real, integer), then $$P$$ is said to be over that domain. For example, $$-2x^2y+3xy^2+1$$ has only integer coefficients, so it is over the integer domain and can be called an integer polynomial. On the other hand, $$\pi r^2+1$$ is not an integer polynomial.

Factor (of a polynomial): If a polynomial $$p$$ can be written as a product of several other polynomials $$q_i$$, each $$q_i$$ is a factor of $$p$$; in other words, $$p$$ is factorized into the $$q_i$$,


 * $$p = \prod_{i=1}^m q_i.$$

Domain of factorization: The domain of factorization is the domain in which a polynomial's factors (each itself a polynomial) are required to lie. If all factors lie in some domain, the product $$P$$ lies in the same or a smaller domain, so the domain of factorization may need to be larger than the smallest domain containing $$P$$. Since $$P$$ may happen to lie in a smaller domain than intended, the domain of factorization should be specified beforehand.

Univariate: A polynomial in a single variable (indeterminate). $$-2x^2+3x+1$$ is univariate, while $$xy+2yz$$ is not.

Selection of a Field (or a Domain of Factorization)
In this section, univariate cases are mainly studied.

Factorizing a polynomial $$P$$ is closely related to finding the value(s) of its variable that satisfy $$P=0$$. This is especially so for univariate polynomials (where the variable is, for example, $$x$$ alone).

Complex Number
In the usual sense, it is the largest domain. It is also the most convenient: by the fundamental theorem of algebra, a polynomial of degree $$n$$ always splits into a product of $$n$$ factors of degree 1.

Let $$a_n \dots a_0$$ be the coefficients; the polynomial $$P$$ can then be factorized into irreducible factors as


 * $$P=a_n x^n+a_{n-1} x^{n-1}+\dots +a_0=a_n(x-z_1)\dots(x-z_n)$$.

If $$x$$ equals one of $$z_1 \dots z_n$$, one of the factors on the right-hand side above is zero (so $$P=0$$); if $$x$$ equals none of the $$z_k$$, $$P$$ cannot be zero. Hence each of $$z_1 \dots z_n$$ is a solution of $$P=0$$, and conversely every solution $$x$$ of $$P=0$$ appears in the list $$z_1 \dots z_n$$. That is, $$z_1 \dots z_n$$ form exactly the set of roots of the polynomial $$P$$.

For example,


 * $$x^2 + 2x+2=(x + 1+i)(x + 1-i).$$
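The factorization above can be verified numerically with complex arithmetic; the sample points below are arbitrary:

```python
# Check of the complex factorization x^2 + 2x + 2 = (x + 1 + i)(x + 1 - i)
# by comparing both sides at a few arbitrary sample points.
z1, z2 = -1 - 1j, -1 + 1j          # the two complex roots

for x in (0, 1, -2, 3 + 1j):
    lhs = x * x + 2 * x + 2
    rhs = (x - z1) * (x - z2)      # a_n * (x - z1)(x - z2) with a_n = 1
    assert abs(lhs - rhs) < 1e-12
```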

Real Number
Since the real numbers are a subset of the complex numbers, a polynomial over the real domain can be factorized into complex factors as above. If real factors are preferred, however, it can always be factorized into real factors of degree at most 2, as follows,


 * $$P=a_n(x-z_1)\dots(x-z_m)(x^2+v_1 x+w_1)\dots(x^2+v_q x+w_q)$$.

Here, all of $$z_1 \dots z_m$$, $$v_1 \dots v_q$$, and $$w_1 \dots w_q$$ are real.

For example,


 * $$x^2 - 2=(x+\sqrt{2})(x-\sqrt{2}),$$
 * $$x^4 + 1=(x^2 + \sqrt{2} x+1)(x^2 - \sqrt{2} x+1).$$
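The degree-2 real factorization of $$x^4+1$$ can be checked by multiplying coefficient lists; the helper `poly_mul` below is an illustrative routine, not a library function:

```python
# Verifying x^4 + 1 = (x^2 + sqrt(2) x + 1)(x^2 - sqrt(2) x + 1) by
# multiplying the coefficient lists of the two quadratic factors.
from math import sqrt

def poly_mul(p, q):
    """Multiply two polynomials given as coefficient lists (low degree first)."""
    out = [0.0] * (len(p) + len(q) - 1)
    for i, pi in enumerate(p):
        for j, qj in enumerate(q):
            out[i + j] += pi * qj
    return out

r2 = sqrt(2)
prod = poly_mul([1.0, r2, 1.0], [1.0, -r2, 1.0])   # low degree first

target = [1.0, 0.0, 0.0, 0.0, 1.0]                 # x^4 + 1
assert all(abs(u - v) < 1e-12 for u, v in zip(prod, target))
```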

Rational Number
A polynomial over the rational domain can be factorized into rational factors, as in the following example.


 * $$x^2 - (1/4)=(x+(1/2))(x-(1/2)).$$

This case is of minor importance: multiplying both sides by the least common multiple of the denominators (here $$4$$) transforms it into a problem over the integer domain, as follows


 * $$4x^2 - 1=(2x+1)(2x-1).$$
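The denominator-clearing step can be checked with exact rational arithmetic; the sample values of $$x$$ are arbitrary:

```python
# Clearing denominators: multiplying x^2 - 1/4 by the lcm of the
# denominators (4) turns the rational problem into the integer problem
# 4x^2 - 1 = (2x + 1)(2x - 1); checked exactly with Fractions.
from fractions import Fraction

for x in (Fraction(1, 2), Fraction(-1, 2), Fraction(3), Fraction(7, 5)):
    assert 4 * (x * x - Fraction(1, 4)) == (2 * x + 1) * (2 * x - 1)
```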

Integer
Since the integers are a subset of the real numbers, a polynomial over the integer domain can be factorized into real or complex factors. If integer factors are required, however, such a restriction can only sometimes be satisfied, and there is no guarantee that a factorization exists. For example, $$x^2 - 2$$ cannot be factorized beyond its original form when integer factors are demanded, although it factors over the reals.

Integer-domain problems can be solved by hand calculation, because the task reduces to finding a solution among a finite number of candidates.

If computational tools for the real or complex problem are available, another way is to use them first and then pick out, or multiply together, some of the resulting factors to obtain integer-domain factors.
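The finite search mentioned above can be sketched with the rational root theorem: any rational root $$p/q$$ of an integer polynomial must have $$p$$ dividing the constant term and $$q$$ dividing the leading coefficient, leaving finitely many candidates to test. The routine below is an illustrative sketch (it assumes a nonzero constant term):

```python
# Finding rational roots of an integer polynomial by testing the finitely
# many candidates p/q given by the rational root theorem.
from fractions import Fraction

def rational_roots(coeffs):
    """coeffs: integer coefficients, low degree first, coeffs[0] != 0."""
    a0, an = coeffs[0], coeffs[-1]
    ps = [d for d in range(1, abs(a0) + 1) if a0 % d == 0]   # divisors of a0
    qs = [d for d in range(1, abs(an) + 1) if an % d == 0]   # divisors of an
    cands = {s * Fraction(p, q) for p in ps for q in qs for s in (1, -1)}
    return sorted(x for x in cands
                  if sum(c * x**i for i, c in enumerate(coeffs)) == 0)

# 4x^2 - 1 = (2x + 1)(2x - 1) has roots +-1/2
print(rational_roots([-1, 0, 4]))   # → [Fraction(-1, 2), Fraction(1, 2)]
```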


 * Factorization of polynomials over finite fields

Before computers were in wide use, factorizing polynomials in a finite number of steps was regarded as especially important and was studied accordingly. Such methods are not adequate for the general case. For example, in the following equations


 * $$x^2+3x+1=(x+{\frac{3+{\sqrt{5}}}2})(x+{\frac{3-\sqrt{5}}2}),$$


 * $$x^4+1=(x^2+{\sqrt{2}}x+1)(x^2-\sqrt{2}x+1),$$

all coefficients in the expanded forms (left-hand sides) are rational, while the coefficients within each bracket (right-hand sides) need not be. Indeed, irrational numbers cannot be computed (or expressed) exactly in a finite number of steps. The following is a method for finding a solution among a finite number of candidates; it is valid when the factors are known to be rational (e.g., an exercise prepared by a teacher).

"Integer polynomials must factor into integer polynomial factors"(in "Kronecker's method")
Quiz: In most cases, the following polynomial
 * $$x^2+bx+c$$

can be decomposed into such a form
 * $$(x+\frac{b+\sqrt{b^2-4c}}{2})(x+\frac{b-\sqrt{b^2-4c}}{2}).$$

However, in some cases this does not yield rational factors. Can you give such an exceptional case?

Answer: For example,
 * $$b=3, c=1.$$

Because, by the statement "integer polynomials must factor into integer polynomial factors", decomposing it into


 * $$(x+\frac{3+\sqrt{5}}{2})(x+\frac{3-\sqrt{5}}{2})$$

is not allowed: the discriminant $$b^2-4c=5$$ is not a perfect square, so the factors above are not rational, let alone integer, polynomials.
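The quiz above can be restated as a test on the discriminant: for integer $$b, c$$, the decomposition yields rational (indeed integer) linear factors exactly when $$b^2-4c$$ is a non-negative perfect square. A minimal sketch:

```python
# x^2 + bx + c (integer b, c) splits into integer linear factors exactly
# when the discriminant b^2 - 4c is a non-negative perfect square.
from math import isqrt

def splits_over_integers(b, c):
    d = b * b - 4 * c
    return d >= 0 and isqrt(d) ** 2 == d

assert splits_over_integers(3, 2)        # x^2 + 3x + 2 = (x + 1)(x + 2)
assert not splits_over_integers(3, 1)    # discriminant 5 is not a square
```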



Application to finding repeated factors
To find a root $$x=a$$ that satisfies $$f(x)=0$$ for a factored polynomial of the form


 * $$f(x)=(x-a)g(x)$$

where $$g(x)$$ is the quotient polynomial, it is troublesome if the computed answer lacks the accuracy the system usually provides. This is most conspicuous when $$a$$ is a familiar number (an integer, a rational number, or an algebraic number such as $$\sqrt{2}$$). Indeed, to find a root of


 * $$f(x)=(x-a)(x-b)$$,

from $$f(x)=0$$, some iterative method (for example Newton-Raphson) is needed. A simple iterative method suffices if $$a \ne b$$; in the case of a multiple root, however, only limited accuracy can be expected. One example is


 * $$0=f(x)=(x-a)^3$$.

During an iterative procedure, a value of $$x$$ relatively close to the exact root (say $$x=a \pm 0.0001$$) already gives a result extremely close to zero ($$f(x)=10^{-12}$$), so the value of $$f(x)$$ itself is a poor guide for locating the zero in $$x$$. Under the expectation that, although different roots may share the same value (multiple roots), roots with different values are not close together, using the formal derivative is effective for obtaining an accurate value of $$x$$. The second derivative of the above polynomial is


 * $$f''(x)=6(x-a)$$,

and the root can be obtained with good accuracy by solving $$f''(x)=0$$. Conversely, if roots with very close values exist, this approach can be misleading. For example,


 * $$f(x)=(x-a+0.0001)(x-a)(x-a-0.0001)$$.

Here, the second derivative will be the same as above


 * $$f''(x)= 6(x-a)$$,

so the small offsets $$\pm 0.0001$$ in $$x$$ leave no trace in $$f''$$, and the three distinct roots are indistinguishable from a triple root.

Thus, the formal derivative is not a cure-all; it does, however, give a high probability of obtaining satisfactory answers in most cases.
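The idea of this section can be sketched as follows: Newton's method applied directly to a triple root converges slowly, while applying it to the formal second derivative recovers the root immediately (the concrete polynomial $$(x-2)^3$$ and the starting point are illustrative choices):

```python
# Multiple roots via the formal derivative: Newton's method on
# f(x) = (x - 2)^3 converges only linearly near the triple root, while
# Newton's method on f''(x) = 6(x - 2) finds the root in one step.

def deriv(coeffs):
    """Formal derivative of a polynomial given low-degree-first."""
    return [i * c for i, c in enumerate(coeffs)][1:]

def horner(coeffs, x):
    """Evaluate a low-degree-first coefficient list at x."""
    r = 0.0
    for c in reversed(coeffs):
        r = r * x + c
    return r

def newton(coeffs, x, steps):
    d = deriv(coeffs)
    for _ in range(steps):
        x -= horner(coeffs, x) / horner(d, x)
    return x

f = [-8.0, 12.0, -6.0, 1.0]              # (x - 2)^3, low degree first
slow = newton(f, 3.0, 20)                # error shrinks only by 2/3 per step
fast = newton(deriv(deriv(f)), 3.0, 20)  # f'' = 6x - 12, root found at once

assert abs(fast - 2.0) < 1e-12           # exact root from f''
assert 1e-7 < abs(slow - 2.0) < 1e-2     # still inaccurate after 20 steps
```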