User:Constant314/sandbox

Dipole antenna fields


I feel like we can bring some insight to the near field and the far field. This is stuff that "I know is true" and is "obvious to me," but maybe not to the other editors here. Please let me have your comments, suggestions, and improvements.

The upshot is:
 * The electric potential, φ, is a function of only the charge distribution. There are many WP:RS for this.
 * The magnetic vector potential, A, is a function of only the current distribution. There are many RS for this.
 * The charge distribution is a dipole. (Obvious?)
 * The current distribution is a monopole (a current monopole, not a magnetic monopole).
 * Dipole fields die off rapidly with distance compared to monopoles. (Common knowledge)
 * At a distance, only A remains.
 * The E and B fields are given by $$ \mathbf{E} = -\nabla \phi - \frac {\partial \mathbf{A}}{\partial t} $$ and $$ \mathbf{B} = \nabla \times \mathbf{A} $$.  Many RS.

Noise
Memory aid

The noise voltage of a 1 kΩ resistor at room temperature is 4 nV per root hertz.

$$ v_{n} = 4.07 \sqrt{\tfrac {T} {300K}} \sqrt{ \tfrac {R} {1 k\Omega}} \tfrac {nV} {\sqrt {Hz} } $$
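Here is a quick numerical check of the memory aid (my own sketch; it is just the standard Johnson–Nyquist formula $$v_n = \sqrt{4 k_B T R}$$):

```python
import math

def thermal_noise_nv_per_rt_hz(r_ohms, t_kelvin=300.0):
    """Johnson-Nyquist noise voltage density, sqrt(4*k_B*T*R), in nV/sqrt(Hz)."""
    K_B = 1.380649e-23  # Boltzmann constant, J/K
    return math.sqrt(4.0 * K_B * t_kelvin * r_ohms) * 1e9

# 1 kOhm at 300 K -> about 4.07 nV/sqrt(Hz), matching the memory aid
print(round(thermal_noise_nv_per_rt_hz(1e3), 2))  # 4.07
```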

Lorentz transformation of the potentials
$$\begin{align}
\phi' &= \gamma \left(\phi - v\mathbf{n}\cdot \mathbf{A} \right) \,, \\
\mathbf{A}' &= \mathbf{A} + (\gamma-1)(\mathbf{A}\cdot\mathbf{n})\mathbf{n} - \frac{\gamma \phi v\mathbf{n} } {c^2} \,.
\end{align}$$

where:
 * $$ \mathbf{A} $$ = magnetic vector potential
 * $$ \phi $$ = scalar electric potential
 * c = speed of light
 * v = magnitude of velocity
 * $$ \mathbf{n} $$ = unit normal in the direction of motion
 * γ = Lorentz factor
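As a sanity check, here is a doodle that applies the transformation above and verifies that $$ (\phi/c)^2 - \mathbf{A}\cdot\mathbf{A} $$ comes out the same in both frames (the numerical values are made up):

```python
import numpy as np

C = 299792458.0  # speed of light, m/s

def boost_potentials(phi, A, v, n):
    """Apply the transformation above for a boost with speed v along unit vector n."""
    n = np.asarray(n, dtype=float)
    A = np.asarray(A, dtype=float)
    gamma = 1.0 / np.sqrt(1.0 - (v / C) ** 2)
    phi_p = gamma * (phi - v * np.dot(n, A))
    A_p = A + (gamma - 1.0) * np.dot(A, n) * n - gamma * phi * v * n / C**2
    return phi_p, A_p

# sanity check: (phi/c)^2 - A.A is a Lorentz invariant (made-up numbers)
phi, A = 1.0, np.array([1e-9, 2e-9, 3e-9])
phi_p, A_p = boost_potentials(phi, A, 0.5 * C, [1.0, 0.0, 0.0])
assert np.isclose((phi / C) ** 2 - A @ A, (phi_p / C) ** 2 - A_p @ A_p,
                  rtol=1e-9, atol=0.0)
```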

ESR
Equivalent series resistance or ESR is the value of a resistor posited in series with an ideal reactance to account for the power losses of an actual reactance. For a capacitor it is defined by $$R_{esr} = (\text{Dissipation factor}) \times {|X_{C}|} = \frac {\text{Dissipation factor}} {\omega C} $$ and for an inductor by $$R_{esr} = (\text{Dissipation factor}) \times \omega L $$ where $$C$$ is the capacitance, $$L$$ is the inductance, and $$\omega$$ is the angular frequency.
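A minimal sketch of the two cases (the component values below are made up):

```python
import math

def esr_capacitor(dissipation_factor, c_farads, f_hz):
    """ESR of a capacitor: R_esr = DF / (omega * C)."""
    return dissipation_factor / (2 * math.pi * f_hz * c_farads)

def esr_inductor(dissipation_factor, l_henries, f_hz):
    """ESR of an inductor: R_esr = DF * omega * L."""
    return dissipation_factor * 2 * math.pi * f_hz * l_henries

# e.g. a 100 uF capacitor with DF = 0.1 at 120 Hz
print(esr_capacitor(0.1, 100e-6, 120.0))  # about 1.33 ohms
```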

Wave Vector Notation
$$ \tilde{\nu} $$

$$ \tilde{\boldsymbol{\nu}} $$

$$\mathbf{F} = q\mathbf{E} + q\mathbf{v} \times \mathbf{B}$$

$$\overrightarrow{F} = q\overrightarrow{E} + q\overrightarrow{v} \times \overrightarrow{B}$$

$$\overrightarrow{\mathbf{F}} = q\overrightarrow{\mathbf{E}} + q\overrightarrow{\mathbf{v}} \times \overrightarrow{\mathbf{B}}$$

$$\mathbf{F} = q\mathbf{E} + q\mathbf{v} \times \mathbf{B}$$

Real field
Every time I read the intro of electric field, I cringe. It is pretty much 19th century physics. Now, 19th century physics is useful and widely taught, especially to non-physicists. It is great for engineering. Yet it contains misconceptions that hinder and cause frustration for folks trying to “get it” on a deeper level. Let me quote Feynman, which you can read here: The Vector Potential. In section 15-4, Feynman says:


 * "A real field is a mathematical function we use for avoiding the idea of action at a distance."

And


 * "A “real” field is then a set of numbers we specify in such a way that what happens at a point depends only on the numbers at that point."

The chapter is about the vector potential, but Feynman isn't limiting himself to only the vector potential. He is addressing all real fields, including the electric field.

So, this is my elaboration. The field is made of nothing but numbers. The numbers are not unique: your numbers may be different from my numbers. Thankfully, we have the theory of relativity, which allows us to understand each other's numbers. The numbers at each point in the field are useful for computing the forces on particles at that point. The field exists only because we humans find it useful. The field is not physical. It is not fundamental. It doesn't move. It doesn't do anything. It is not attached to charged particles. There is only one field of a given type, therefore the proper article is "the," as in "the electric field." A charged particle does not have "an electric field." Electric fields do not interact, because there is only one electric field. Electric fields do not propagate. However, we do say, write, and repeat those things. We can find plenty of examples in reliable sources. It is not wrong; it is a type of jargon. It allows us to say things using fewer words. If we were writing carefully, what we would say is that a charged particle influences the value of the field in its vicinity. The values of the field change dynamically over space and time in accordance with a wave equation. The electric field is such a useful and reliable artifice for computing outcomes that we sometimes tend to think of it as a physical thing. It is not. It is nothing but imagination. The electric force is real; it does things. The electric field is a purely human construct. Once you embrace that, you can stop wasting time asking unanswerable questions.

I am not proposing to rewrite the entire article, but only the first few sentences.

Power factor
The general expression for power factor is given by


 * $$ \mbox{power factor} = P/P_a $$
 * $$ P_a = I_{rms} V_{rms} $$

where $$P$$ is the real power measured by an ideal wattmeter, $$I_{rms}$$ is the rms current measured by an ideal ammeter, and $$V_{rms}$$ is the rms voltage measured by an ideal voltmeter. Apparent power, $$P_a$$, is the product of the rms current and the rms voltage.

Periodic waveforms
If the waveforms are periodic with a period that is much shorter than the averaging time of the physical meters, then the power factor can be computed by the following:


 * $$ \mbox{power factor} = P/P_a $$
 * $$ P_a = I_{rms} V_{rms} $$
 * $$ P =\frac 1 T \int_{t'}^{t'+T} i(t)v(t) dt $$
 * $$ I_{rms}^2 =\frac 1 T \int_{t'}^{t'+T} {i(t)}^2 dt $$
 * $$ V_{rms}^2 =\frac 1 T \int_{t'}^{t'+T} {v(t)}^2 dt $$

where $$i(t)$$ is the instantaneous current, $$v(t)$$ is the instantaneous voltage, $$t'$$ is an arbitrary starting time, and $$T$$ is the period of the waveforms.
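The three averaging integrals can be checked numerically; here is my own doodle (midpoint sums, with a made-up 60° phase shift between sinusoidal current and voltage):

```python
import math

def power_factor(i, v, t_prime, period, n=10000):
    """Numerically evaluate the three averaging integrals above (midpoint rule)."""
    dt = period / n
    ts = [t_prime + (k + 0.5) * dt for k in range(n)]
    p = sum(i(t) * v(t) for t in ts) * dt / period
    i_rms = math.sqrt(sum(i(t) ** 2 for t in ts) * dt / period)
    v_rms = math.sqrt(sum(v(t) ** 2 for t in ts) * dt / period)
    return p / (i_rms * v_rms)

# sinusoids with a 60 degree phase shift: power factor should be cos(60°) = 0.5
w = 2 * math.pi * 50.0
pf = power_factor(lambda t: math.cos(w * t - math.pi / 3),
                  lambda t: math.cos(w * t), 0.0, 1 / 50.0)
print(round(pf, 3))  # 0.5
```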

Nonperiodic waveforms
If the waveforms are not periodic and the physical meters have the same averaging time, then the equations for the periodic case can be used with the exception that $$T$$ is the averaging time of the meters instead of the waveform period.

References for reflection coefficient
The reflection coefficient (RC) is a widely used concept that appears in many reliable sources across many subject areas that involve waves. It applies to circuit quantities (voltage and current), electromagnetic quantities (E and B), and sonic quantities (velocity and displacement). Typically, the symbol Γ is used for the reflection coefficient, although ρ and r also appear in reliable sources.

In circuits, the voltage RC has the opposite sign to the current RC. When not specified, voltage RC is usually assumed.

In electromagnetics, the E-field RC has the opposite sign to the B-field RC. When not specified, E-field RC is usually assumed.

Here are some reliable sources that can be accessed from the internet.


 * Harrington, Time-Harmonic Electromagnetic Fields, p. 55, eq. 2-45, for a wave propagating from medium 1 to medium 2.  η is the wave impedance of the medium.


 * Hayt, Engineering Electromagnetics, 8th ed, p. 321, eq. 73


 * Wadell, Transmission Line Design Handbook, p. 501, eq C.2


 * Steer, Microwave And Rf Design: Transmission Lines, p. 68, eq. 2.59


 * Rosenstark, Transmission Lines in Computer Engineering, p. 23, eq. 2.6

Harrington is a widely cited graduate-level textbook used in the study of waveguiding structures.

Hayt is a widely cited undergraduate-level textbook used in electrical engineering schools.

First, I agree that you got the correct result. However, there are three problems with your derivation.


 * Wadell doesn’t give a derivation. He gives the result of the derivation.  It is a reliable source for the result, but it is not a reliable source for the correctness of your derivation.


 * You define Γ as the relative difference between the actual current and the optimum current. That definition does not appear in any reliable source.  It may be coincidentally correct, but it is an observation and not a definition.  Γ is defined as the ratio of the reflected signal to the incident signal.  Its purpose is to let you calculate the amplitude and phase of the reflection.  Its purpose is not for calculating the relative difference between the actual current and the optimum current.  No one cares about that.  They care about the reflected signal.


 * You start with the knowledge of the optimum current, which is the current that you get when there is no reflection. But you don’t know that until you have derived it.  You cannot start with that.

The usual approach is to assume that there are 2 coefficients, Γ and T, such that the reflected and transmitted signals are given by $$ v_r = \Gamma v_i $$ and $$ v_t = T v_i $$. Then you apply the continuity requirements, which are:
 * $$ v_i + v_r = v_t $$  This is the voltage continuity requirement. In words: incident voltage + reflected voltage = transmitted voltage.
 * $$ i_i - i_r = i_t $$  This is the current continuity requirement. In words: incident current - reflected current = transmitted current.

From that you derive $$ \Gamma = \frac {Z_2 - Z_1}{Z_2 + Z_1} $$ and $$ T = \frac {2 Z_2}{Z_2 + Z_1} $$ where $$Z_1$$ and $$Z_2$$ are the impedances on the incident and transmitted sides.

The flow of results goes like this:
 * Fundamental physical requirements → reflection coefficient → optimum load → optimum current.

What you have done is this:
 * optimum load + optimum current + convenient definition of Gamma → Gamma = reflection coef

All that you have shown is that the algebra at the tail end of the derivation is reversible.

The only thing that is important and notable is the fundamental physical requirements that start the entire deductive chain.
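The deductive chain can be sketched numerically (my own doodle; the 50 Ω and 75 Ω values are arbitrary):

```python
def reflection_transmission(z1, z2):
    """Solve the continuity conditions 1 + Gamma = T (voltage) and
    (1 - Gamma)/z1 = T/z2 (current) for the two coefficients."""
    gamma = (z2 - z1) / (z2 + z1)
    t = 2 * z2 / (z2 + z1)
    return gamma, t

g, t = reflection_transmission(50.0, 75.0)
# both continuity requirements hold
assert abs((1 + g) - t) < 1e-12
assert abs((1 - g) / 50.0 - t / 75.0) < 1e-12
print(g, t)  # 0.2 1.2
```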

Caesium beam resonator


In a caesium beam frequency reference, timing signals are derived from a high stability voltage-controlled quartz crystal oscillator (VCXO) that is tunable over a narrow range. The output frequency of the VCXO (typically 5 MHz) is multiplied by a frequency synthesizer to obtain microwaves at the frequency of the caesium atomic hyperfine transition (about 9192.632 MHz). The output of the frequency synthesizer is amplified and applied to a chamber containing caesium gas, which absorbs the microwaves. The output current of the caesium chamber increases as absorption increases.

The remainder of the circuitry simply adjusts the running frequency of the VCXO to maximize the output current of the caesium chamber.

Loaded cable - Heaviside condition


The loaded cable section discusses the Heaviside condition and shows approximations that are valid for cables operating under the Heaviside condition. However, no practical transmission line was ever operated under the Heaviside condition, hence the mathematical approximations are not valid for a typical loaded line. I intend to cut that section back to what is true. That means cutting most of the section.

The plot shows impedance ratios for a typical coaxial cable with three different dielectric insulations. The good curve is typical of modern high quality foam insulation. The medium curve is typical of gutta-percha. The low curve is typical of pulp insulation.

The blue curve is the ratio R/(ωL). At low frequency, R is a constant dominated by the dc resistance, $$R_{DC}$$. In the low frequency domain, R/(ωL) decreases as 1/ω. Around 100 kHz, skin effect becomes dominant and R/(ωL) decreases as $$1/\sqrt{\omega}$$.

The red curves are G/(ωC). At very low frequency, G is a constant dominated by the dc conductance, $$G_{DC}$$. In the very low frequency domain, G/(ωC) decreases as 1/ω. At some low frequency, which depends on the dielectric, the loss tangent takes over and G/(ωC) → loss tangent, which is more or less constant.

The Heaviside condition is satisfied where the blue curve touches a red curve.

Loading the transmission line increases inductance. The effect is to push the blue curve toward the left.
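A doodle expressing the condition (the per-meter values below are made up, not taken from the plot):

```python
def heaviside_ratio(r, l, g, c):
    """(R/L) / (G/C). Equals 1 exactly at the Heaviside condition R/L = G/C;
    values above 1 mean the R/(wL) curve lies above the G/(wC) curve."""
    return (r / l) / (g / c)

# hypothetical per-meter values
unloaded = heaviside_ratio(0.1, 250e-9, 1e-9, 100e-12)
loaded = heaviside_ratio(0.1, 25e-6, 1e-9, 100e-12)  # loading coils raise L
assert loaded < unloaded   # loading pushes the line toward the condition
print(unloaded, loaded)
```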

Example of an external link


WP:MOS

Sign convention in Fourier transform
I use the same convention that you are advocating. I have no problem with using that convention. If you want to state a reason for that convention in the article you need a reliable secondary source that states that reason. No amount of WP:OR will change that. However, I do not mind dabbling in OR here on the talk page.

Let
 * $$\begin{align} &\hat{f}_{-}(\omega) = \int_{-\infty}^\infty f(t) e^{-i \omega t}\, dt \end{align}$$ This is the conventional forward transform.
 * $$\begin{align} &\hat{f}_{+}(\omega) = \int_{-\infty}^\infty f(t) e^{+i \omega t}\, dt \end{align}$$ This is the other convention.  It is mathematically equal to the conventional reverse transform.

I hope it is obvious that
 * $$\begin{align} &\hat{f}_{-} = \hat{f}_{+}^* \end{align}$$ Thus the results of these two conventions are simply conjugates of each other.

This has no physical effect because physical effects are caused by energy or power. The power of a Fourier transform is computed by multiplying the transform by its conjugate.

Again, I hope it is obvious that
 * $$\begin{align} \hat{f}_{-} \hat{f}_{-}^* = \hat{f}_{+} \hat{f}_{+}^* \end{align}$$

So, let's look at a couple of examples. I will suppress multiplicative constants that clutter up the results.

First, consider the Fourier transform of $$ \cos( a t) $$.
 * The Fourier transform under the usual convention is $$ \delta(\omega-a)+\delta(\omega+a)$$. It has Fourier components at both $$+a$$ and $$-a$$.
 * The Fourier transform under the other convention is $$ \delta(\omega+a)+\delta(\omega-a)$$. The result is exactly the same.

Next, consider the Fourier transform of $$ \sin( a t) $$.
 * The Fourier transform under the usual convention is $$ -i\delta(\omega-a)+i\delta(\omega+a)$$. It has Fourier components at both $$+a $$ and $$-a $$.
 * The Fourier transform under the other convention is $$ -i\delta(\omega+a)+i\delta(\omega-a)$$. It has Fourier components at both $$+a$$ and $$-a$$.  The result is the conjugate of the result using the usual convention.

Now let me go way off into OR la-la land to speculate why engineers prefer the usual convention. Consider the Fourier transform of $$ \cos(a t) + \sin(a t) $$. It is $$ \delta(\omega-a)+\delta(\omega+a) -i\delta(\omega-a)+i\delta(\omega+a)$$. The component at the positive frequency $$+a$$ is $$ \delta(\omega-a) -i\delta(\omega-a)$$. Notice in particular that the sign of the imaginary part is negative. Engineers prefer this because $$ \cos(a t)+\sin(a t) $$ lags $$ \cos(a t)$$ by 45°. When an engineer plots this in Cartesian space, it is [1,-1]. The principal argument is negative. Engineers prefer that because the phase of cos(at) + sin(at) relative to cos(at) is negative. Mathematicians consider cos(at) and sin(at) as basis vectors, and they plot cos(at) + sin(at) as [1,1]. That is all there is to it. Engineers prefer that the Fourier component of sin(at) be negative at positive frequency.
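Here is a numerical doodle with numpy's FFT (which uses the usual convention) showing both the conjugate relation between the two conventions and the sign of the sin component at positive frequency:

```python
import numpy as np

# Sample one period of sin(a t) with a = 2*pi and compare conventions.
n = 64
t = np.arange(n) / n
f = np.sin(2 * np.pi * t)

minus = np.fft.fft(f)       # numpy's fft uses the usual e^{-i w t} convention
plus = n * np.fft.ifft(f)   # un-normalized ifft gives the e^{+i w t} convention

# the two spectra are conjugates of each other, so the power spectra agree
assert np.allclose(plus, np.conj(minus))
assert np.allclose(np.abs(plus) ** 2, np.abs(minus) ** 2)

# under the usual convention, the component of sin at positive frequency
# (bin 1) has a negative imaginary part
assert minus[1].imag < 0 and plus[1].imag > 0
```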

Wolfram Mathworld
The following quoted material comes from Wolfram MathWorld. It is copyrighted material, but the terms of use permit this use.

 "  (1) $$\begin{align} f(x) = \int_{-\infty}^\infty F(k) e^{+i 2\pi k x}\, dk \end{align}$$

(2) $$\begin{align} F(k) = \int_{-\infty}^\infty f(x) e^{-i 2\pi k x}\, dx \end{align}$$  "  Here x and k are continuous, unitless domains with no particular meaning.  k comes from the idea of harmonic number with the spacing between harmonics reduced to zero.

and

 "  In general, the Fourier transform pair may be defined using two arbitrary constants  and  as


 * (15) $$\begin{align} F(\omega) = \sqrt { \frac {|b|}{(2\pi)^{1-a}}} \int_{-\infty}^\infty f(t) e^{i b \omega t}\, dt \end{align}$$
 * (16) $$\begin{align} f(t) = \sqrt { \frac {|b|}{(2\pi)^{1+a}}} \int_{-\infty}^\infty F(\omega) e^{-i b \omega t}\, d\omega \end{align}$$

''… By default, the Wolfram Language takes FourierParameters as (0,1). Unfortunately, a number of other conventions are in widespread use. For example, (0,1) is used in modern physics, (1,-1) is used in pure mathematics and systems engineering, (1,1) is used in probability theory for the computation of the characteristic function, (-1,1) is used in classical physics, and (0,-2pi) is used in signal processing.''  " 

Unwinding that yields


 * modern physics and  Wolfram default (0,1)
 * $$\begin{align} F(\omega) = \sqrt { \frac {1}{2\pi}} \int_{-\infty}^\infty f(t) e^{+i \omega t}\, dt \qquad f(t) = \sqrt { \frac {1}{2\pi}} \int_{-\infty}^\infty F(\omega) e^{-i \omega t}\, d\omega \end{align}$$


 * classical physics (-1,1)
 * $$\begin{align} F(\omega) = \frac {1}{2\pi} \int_{-\infty}^\infty f(t) e^{+i \omega t}\, dt \qquad f(t) = \int_{-\infty}^\infty F(\omega) e^{-i \omega t}\, d\omega \end{align}$$


 * probability (1,1)
 * $$\begin{align} F(\omega) = \int_{-\infty}^\infty f(t) e^{+i \omega t}\, dt \qquad  f(t) = \frac {1}{2\pi} \int_{-\infty}^\infty F(\omega) e^{-i \omega t}\, d\omega \end{align}$$


 * pure mathematics and systems engineering (1,-1)
 * $$\begin{align} F(\omega) = \int_{-\infty}^\infty f(t) e^{-i \omega t}\, dt \qquad  f(t) = \frac {1}{2\pi} \int_{-\infty}^\infty F(\omega) e^{+i \omega t}\, d\omega \end{align}$$


 * signal processing (0,-2π)
 * $$ F(\omega) = \int_{-\infty}^\infty f(t) e^{-i 2\pi \omega t}\, dt \qquad f(t) = \int_{-\infty}^\infty F(\omega) e^{+i 2\pi \omega t}\, d\omega $$
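A sketch that unwinds the prefactors mechanically from eqs. (15) and (16):

```python
import math

def fourier_prefactors(a, b):
    """Forward and inverse prefactors for Wolfram's FourierParameters (a, b),
    read off from eqs. (15) and (16) above."""
    fwd = math.sqrt(abs(b) / (2 * math.pi) ** (1 - a))
    inv = math.sqrt(abs(b) / (2 * math.pi) ** (1 + a))
    return fwd, inv

# pure mathematics / systems engineering (1, -1): prefactors 1 and 1/(2*pi)
fwd, inv = fourier_prefactors(1, -1)
assert math.isclose(fwd, 1.0) and math.isclose(inv, 1 / (2 * math.pi))

# modern physics / Wolfram default (0, 1): both 1/sqrt(2*pi)
fwd, inv = fourier_prefactors(0, 1)
assert math.isclose(fwd, 1 / math.sqrt(2 * math.pi)) and math.isclose(inv, fwd)

# signal processing (0, -2*pi): both prefactors are exactly 1
fwd, inv = fourier_prefactors(0, -2 * math.pi)
assert math.isclose(fwd, 1.0) and math.isclose(inv, 1.0)
```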

Doodles

 * $$ Q = \epsilon_0 \iint\limits_S \mathbf{E} \cdot d \mathbf{A} $$

where $$H$$ and $$E$$ are the magnitudes of $$\mathbf{H}$$ and $$\mathbf{E}$$. Multiplying the last two equations gives

Dividing (or cross-multiplying) the same two equations gives $$H=YE$$, where


 * $$ \mu '   \mu^'    \mu^{'} $$


Mutual inductance



Don't do this: $$4.35*10^{-17}$$

Instead, do this: $$4.35 \times 10^{-17}$$

Or better still, within text, do this: $$

Propagating Plane Wave
I think a better explanation is that the term Maxwell added allowed him to derive a wave equation that had a propagating solution. If you chase the math, it looks like this:
 * Causal relationships between fields in a propagating plane wave.jpg

From E you can derive D. From D you can derive ∂D/∂t (the electric displacement current). From that, you derive H. From H you derive B.  From B you derive ∂B/∂t (the magnetic displacement current). Notice that the arrows mean "is derived from" and do not mean "causes". However, when speaking casually, it is common to interchange the notion of "is derived from" with "causes". As an aside, the two displacement current terms are legitimate fields that can be drawn and plotted just like any other field. So, if you want to intuitively understand how E causes H and H causes E, it is easier if you use four fields. Notice that the two displacement current terms involve differentiation. In a monochromatic wave, that causes 90 degrees of phase shift. The Maxwell–Faraday equation includes a minus sign that provides another 180 degrees of phase shift. If you chase your way around the loop, you get 360 degrees of phase shift. The gain is "unity". It is exactly the condition for self-sustaining oscillation.


 * [[File:Spatial relationship between the electric field, electric displacement current, and the magnetic field (H).jpg|left|thumb|The spatial relationship between E, ∂D/∂t, and H.]]

Here you can see the phase shift between E and ∂D/∂t. You can also see how loops of electric displacement current (∂D/∂t) "cause" the H field.


 * [[File:Spatial relationship between the magnetic (H) field, magnetic displacement current, and the electric field.jpg|left|thumb|Depiction of the spatial relationship between H, ∂B/∂t, and E.]]

Here you can see the phase shift between H and ∂B/∂t. You can also see how loops of magnetic displacement current (∂B/∂t) "cause" the E field.

Plane waves in linear media
The propagation factor of a sinusoidal plane wave propagating in the x direction in a linear material is given by


 * $$ P = e^{-jkx}  $$

where
 * $$k = k' - jk'' = \sqrt{-j \omega \mu (\sigma + j\omega \epsilon)  } = \sqrt{-(\omega \mu ''  + j \omega \mu ')(\sigma + \omega \epsilon '' + j \omega \epsilon  ') }\;$$ = wavenumber
 * $$k' =$$ phase constant in the units of radians/meter
 * $$k'' =$$ attenuation constant in the units of nepers/meter
 * $$\omega =$$ frequency in the units of radians/second
 * $$x =$$ distance traveled in the x direction
 * $$\sigma =$$ conductivity in S/meter
 * $$\epsilon = \epsilon' - j\epsilon'' =$$ complex permittivity
 * $$\mu = \mu' - j\mu'' =$$ complex permeability
 * $$j=\sqrt{-1}$$

The sign convention is chosen for consistency with propagation in lossy media. If the attenuation constant is positive, then the wave amplitude decreases as the wave propagates in the x direction.
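A sketch that evaluates the wavenumber from the definition above (the lossless test values are made up):

```python
import cmath, math

EPS0 = 8.8541878128e-12     # F/m
MU0 = 4e-7 * math.pi        # H/m

def wavenumber(omega, sigma, eps_r, mu_r=1.0, eps_lt=0.0, mu_lt=0.0):
    """k = k' - j k'' from k^2 = -j*omega*mu*(sigma + j*omega*eps), with
    eps = eps'(1 - j*eps_lt) and mu = mu'(1 - j*mu_lt) as in the list above."""
    eps = eps_r * EPS0 * (1 - 1j * eps_lt)
    mu = mu_r * MU0 * (1 - 1j * mu_lt)
    k = cmath.sqrt(-1j * omega * mu * (sigma + 1j * omega * eps))
    if k.real < 0:                 # choose the root with k' >= 0
        k = -k
    return k.real, -k.imag         # (phase constant k', attenuation constant k'')

# lossless sanity check: k' = omega*sqrt(mu*eps) and k'' = 0
w = 2 * math.pi * 1e9
kp, kpp = wavenumber(w, 0.0, 4.0)
assert math.isclose(kp, w * math.sqrt(4.0 * EPS0 * MU0), rel_tol=1e-9)
assert abs(kpp) < 1e-9
```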

manipulation
By a straightforward, if lengthy, algebraic calculation


 * $$k = k' - jk'' = \sqrt{-j \omega \mu'(1-j \chi)\, j\omega \epsilon'(1-j(\kappa + \xi) ) } = \omega \sqrt {\mu' \epsilon'} \sqrt {(1-j \chi)(1-j(\kappa + \xi))} = \omega \sqrt {\mu' \epsilon'} \sqrt {a-jb}$$


 * $$ \xi = \frac {\epsilon^{''}} {\epsilon^{'}}  $$  dielectric loss tangent
 * $$ \chi = \frac {\mu^{''}} {\mu^{'}}   $$ magnetic loss tangent
 * $$ \kappa = \frac {\sigma} {\omega \epsilon^{'}} = \frac {1} {\omega \tau} $$


 * $$ \tau = \frac {\epsilon'} {\sigma } = \rho \epsilon' $$ dielectric relaxation time
 * Typical values for $$ \tau $$: copper: $$150 \times 10^{-9}$$ ps ($$10^6$$ THz), silicon: 244 ns (650 kHz), polyethylene: 150,000 s (1 μHz), soil: 300 ps (500 MHz) to 20 ns (8 MHz)
 * $$ a = (1 -\kappa \chi -\xi \chi )$$
 * $$ b = (\kappa + \chi + \xi )$$

Using the formula for the square root of a complex number a + jb


 * $$ k' =\omega \sqrt{ \frac {\mu' \epsilon'} 2} \sqrt { \sqrt {a^2 + b^2  } +a  }$$
 * $$ k'' = \omega \sqrt{ \frac {\mu' \epsilon'} 2} \frac {b} {\sqrt { \sqrt {a^2 + b^2  } +a  }}$$

If b = 0 and a < 0, then the previous equations have a divide-by-zero problem. However, it is mathematically impossible for both b = 0 and a < 0 at the same time. The second order terms $$ \xi^2, \chi^2, \xi\chi, \xi^2\chi^2  $$ are shown in red in the two following equations.


 * $$ k' =\omega \sqrt{ \frac {\mu' \epsilon'} 2} \sqrt { \sqrt {1 \color{Red} + \xi^2 + \chi^2 + \xi^2\chi^2 \color{Black}+(1\color{Red}+\chi^2\color{Black})\kappa^2 +2\xi(1\color{Red}+\chi^2\color{Black})\kappa  } +(1 -\kappa \chi \color{Red}-\xi \chi\color{Black} )  }$$


 * $$ k'' = \omega \sqrt{ \frac {\mu' \epsilon'} 2} \frac {(\kappa + \chi + \xi )} {\sqrt { \sqrt {1 \color{Red} + \xi^2 + \chi^2 + \xi^2\chi^2 \color{Black}+(1\color{Red}+\chi^2\color{Black})\kappa^2 +2\xi(1\color{Red}+\chi^2\color{Black})\kappa  } +(1 -\kappa \chi \color{Red}-\xi \chi\color{Black} )  }}$$
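The square-root-of-a-complex-number formulas for k' and k'' can be checked against a direct complex square root (my own doodle; the a and b test values are arbitrary):

```python
import cmath, math

def k_from_ab(omega, mu_p, eps_p, a, b):
    """k' and k'' from the square-root-of-a-complex-number formulas above."""
    m = math.sqrt(a * a + b * b)
    kp = omega * math.sqrt(mu_p * eps_p / 2) * math.sqrt(m + a)
    kpp = omega * math.sqrt(mu_p * eps_p / 2) * b / math.sqrt(m + a)
    return kp, kpp

# compare against the principal square root of a - jb (arbitrary test values)
omega, mu_p, eps_p = 2 * math.pi * 1e6, 4e-7 * math.pi, 8.85e-12
a, b = 0.9, 0.4
k = omega * math.sqrt(mu_p * eps_p) * cmath.sqrt(a - 1j * b)
kp, kpp = k_from_ab(omega, mu_p, eps_p, a, b)
assert math.isclose(kp, k.real, rel_tol=1e-12)
assert math.isclose(kpp, -k.imag, rel_tol=1e-12)
```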

low magnetic and dielectric loss
When magnetic and dielectric losses are low, then the second order terms $$ \xi^2, \chi^2, \; \xi\chi, \; \xi^2\chi^2  $$ may be neglected.


 * $$ k' =\omega \sqrt{ \frac {\mu' \epsilon'} 2} \sqrt { \sqrt {1 +\kappa^2 +2\xi\kappa  } +(1 -\kappa \chi )  }$$


 * $$ k'' = \omega \sqrt{ \frac {\mu' \epsilon'} 2} \frac {(\kappa + \chi + \xi )} {\sqrt { \sqrt {1 +\kappa^2 +2\xi\kappa } +(1 -\kappa \chi  )  }}$$

no magnetic loss

 * $$ k' =\omega \sqrt{ \frac {\mu' \epsilon'} 2} \sqrt { \sqrt {1 +\kappa^2 +2\xi\kappa  } +1  }$$


 * $$ k'' = \omega \sqrt{ \frac {\mu' \epsilon'} 2} \frac {(\kappa + \xi )} {\sqrt { \sqrt {1 +\kappa^2 +2\xi\kappa } +1  }}$$

Substituting $$ \kappa = 1 / (\rho \omega \epsilon') $$


 * $$ k' =\omega \sqrt{ \frac {\mu' \epsilon'} 2} \frac {\sqrt{\rho \omega \epsilon'} \sqrt { \sqrt {1 +\kappa^2 +2\xi\kappa  } +1 } } {\sqrt{\rho \omega \epsilon'}} $$


 * $$ k'' = \omega \sqrt{ \frac {\mu' \epsilon'} 2} \frac {(\kappa + \xi )\sqrt{\rho \omega \epsilon'}} { \sqrt{\rho \omega \epsilon'}\sqrt { \sqrt {1 +\kappa^2 +2\xi\kappa  } +1 }}$$


 * $$ k' =\sqrt{ \frac { \omega \mu' } {2 \rho}} { \sqrt { \sqrt {1 +2\xi\rho \omega \epsilon' +{(\rho \omega \epsilon')}^2  } +\rho \omega \epsilon' } }  $$


 * $$ k'' = \sqrt{ \frac {\omega \mu' } {2 \rho}} \frac {1 + \xi\rho \omega \epsilon' }    { \sqrt { \sqrt {1 +2\xi\rho \omega \epsilon' +{(\rho \omega \epsilon')}^2  } +\rho \omega \epsilon' } }      $$


 * $$ \delta = \sqrt{ \frac {2 \rho} {\omega \mu' } } \frac    { \sqrt { \sqrt {1 +2\xi\rho \omega \epsilon' +{(\rho \omega \epsilon')}^2  } +\rho \omega \epsilon' } } {1 + \xi\rho \omega \epsilon' }      $$


 * $$ \delta = \sqrt{ \frac {2 \rho} {\omega \mu' } } \frac    { \sqrt { \sqrt {1 +2\rho \omega \epsilon'\tan \delta_e +{(\rho \omega \epsilon')}^2  } +\rho \omega \epsilon' } } {1 + \rho \omega \epsilon' \tan \delta_e}      $$

At high frequency


 * $$ \delta =  \frac    { 2 \rho } {1 + \rho \omega \epsilon' \tan \delta_e}   \sqrt{ \frac { \epsilon'} { \mu' } }   $$

Limiting skin depth in silicon
There is a statement about the skin depth in silicon not being less than about 11 m. This statement does not have a reliable source and I believe that it is incorrect, so I will remove it. There does not need to be any justification other than that there is no reliable source cited, but I will give a more elaborate justification. The formula, as given, is correct if you can neglect dielectric loss. Any type of loss will decrease skin depth. Using these equations from wavenumber:
 * $$ \delta = \frac 1 {k''} $$
 * $$k = k' - jk'' = \sqrt{- j \omega \mu (\sigma + j \omega \epsilon)} =\sqrt{-(\omega \mu ''  + j \omega \mu ')(\sigma + \omega \epsilon '' + j \omega \epsilon ') }\;$$

In the equation for the wavenumber, you see $$\sigma + \omega \epsilon '' $$. The dielectric loss term adds to the conductivity term. This effectively reduces resistivity as frequency increases. It is straightforward, but tedious, to carry out all the multiplications, gather terms, and apply the formula for the square root of a complex number. The result shows that if there is any dielectric (or magnetic) loss, then there is no non-zero lower bound. If someone would like to check the math, I would be grateful.

I plugged numbers in, assuming a dielectric loss tangent of 0.3%, and I got 0.9 m at 10 GHz. The conclusion that skin depth in silicon is deep enough to ignore is still correct. Of course, I could have made a mistake. If anyone would like to run the numbers, I used $$ {\epsilon ''} / {\epsilon '} = 0.003,\ \epsilon_r=12,\ \sigma = 435\ \mu\text{S/m}$$.
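For anyone who wants to rerun the numbers, here is a sketch that folds the dielectric loss into the conductivity and takes $$\delta = 1/k''$$, using the constants quoted above:

```python
import cmath, math

EPS0 = 8.8541878128e-12   # F/m
MU0 = 4e-7 * math.pi      # H/m

def skin_depth(f_hz, sigma, eps_r, loss_tangent):
    """delta = 1/k'' with dielectric loss folded in as sigma + omega*eps''."""
    w = 2 * math.pi * f_hz
    eps_p = eps_r * EPS0
    sigma_eff = sigma + w * eps_p * loss_tangent   # sigma + omega*eps''
    k = cmath.sqrt(-1j * w * MU0 * (sigma_eff + 1j * w * eps_p))
    if k.real < 0:          # choose the decaying root
        k = -k
    return 1.0 / (-k.imag)

# the silicon numbers quoted above
d = skin_depth(10e9, 435e-6, 12, 0.003)
print(round(d, 2))  # about 0.9 m
```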

Feynman Lectures
You can read what Feynman says about field energy here: - Poynting vector and ambiguity of field energy. In sec 27-4: "Anyway, everyone always accepts the simple expressions we have found for the location of electromagnetic energy ... nobody has ever found anything wrong with them ... we believe that it is probably perfectly right."

dBm

 * $$P_\mathrm{dBm} = 10\ \log_{10}(k_\text{B} T \Delta f / 1\,\textrm{mW})\ \textrm{dBm} = 10 \times \log_{10} (1.38 \times 10^{-23} \times 300 \times 1 / 10^{-3} ) = 10 \times \log_{10} ( 4.14 \times 10^{-18} ) = -173.8 $$
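The same arithmetic as a snippet:

```python
import math

def thermal_noise_dbm_per_hz(t_kelvin=300.0):
    """10*log10(k_B * T * 1 Hz / 1 mW), as in the line above."""
    K_B = 1.38e-23  # J/K, the rounded value used in the arithmetic above
    return 10 * math.log10(K_B * t_kelvin * 1.0 / 1e-3)

print(round(thermal_noise_dbm_per_hz(), 1))  # -173.8
```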

Googled "thermal noise power formula"

I found the sengpielaudio calculator. They calculate dBu and dBV, neither of which is dBm.

I also found

Thermal noise in a 50 Ω system at room temperature is -174 dBm / Hz.

calculator produces -173.8277942 dBm

Eq. 2.121 verifies formula for power P = kTB

Verifies Thermal noise power equation

-173.9 dBm⁄Hz

and some printed sources

"Noise power at 300 K = -173.83 dBm/Hz"

"Noise power = kTB"

"P = kTB"

Telegrapher's equations
$$ \gamma = \sqrt{(G+j\omega C)(R+j\omega L)} $$

Since $$ \omega C \gg G \;$$ and $$ \omega L \ll R \;$$

$$ \gamma \approx \sqrt{j \omega C R} = (1+j) \sqrt{\frac {RC \omega}{2}}$$

$$ v = \frac \omega { Im (\gamma) } = \frac \omega { \sqrt{\frac {RC \omega}{2}}} =\sqrt { \frac {2 \omega}{RC} } $$
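A doodle that checks the approximation against the exact propagation constant (the cable constants are made-up, per-kilometer values chosen so that $$\omega L \ll R$$ and $$G \ll \omega C$$):

```python
import cmath, math

def gamma_exact(r, l, g, c, omega):
    """Propagation constant from the full telegrapher's expression."""
    return cmath.sqrt((g + 1j * omega * c) * (r + 1j * omega * l))

# hypothetical per-kilometer constants for a voice-frequency cable
r, l, g, c = 180.0, 0.6e-3, 1e-7, 50e-9
omega = 2 * math.pi * 100.0

gam = gamma_exact(r, l, g, c, omega)
v_exact = omega / gam.imag                 # km/s, since constants are per km
v_approx = math.sqrt(2 * omega / (r * c))  # the approximation above
assert abs(v_exact - v_approx) / v_approx < 0.01
```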

Dynamical variables
These references appear in the Gyrator–capacitor model article:

 * Hamill
 * Mohammad
 * González
 * Lambert

Magnetic voltage
Magnetic voltage, $$ v_m $$, is an alternate name for magnetomotive force (mmf), $$\mathcal{F} $$, which is analogous to electrical voltage in an electric circuit. The SI unit of mmf is the ampere or ampere-turn. Not all authors use the term magnetic voltage. The magnetomotive force applied to an element between point A and point B is equal to the line integral of the magnetic field strength, $$ \mathbf{H} $$, along a path through the component.


 * $$v_m = \mathcal{F}= - \int_A^B \mathbf{H}\cdot\operatorname{d}\mathbf{l}$$

The resistance–reluctance model uses the same equivalence between magnetic voltage and magnetomotive force.

Magnetic current
Magnetic current, $$i_m$$, is an alternate name for the time rate of change of flux, $$\dot \Phi$$, which is analogous to electrical current in an electric circuit. In the physical circuit, $$\dot \Phi$$ is the magnetic displacement current. The SI unit of $$\dot \Phi$$ is the weber/second, or volt. The magnetic current flowing through an element of cross section, $$S$$, is the time derivative of the area integral of the magnetic flux density, $$ \mathbf{B} $$.


 * $$ i_m = \dot \Phi = \frac {d} {dt} \int_S \mathbf{B}  \cdot\operatorname{d}\mathbf{S}$$

The resistance–reluctance model uses a different equivalence, taking magnetic current to be an alternate name for flux, $$ \Phi$$. This difference in the definition of magnetic current is the fundamental difference between the gyrator-capacitor model and the resistance–reluctance model. The definitions of magnetic current and magnetic voltage imply the definitions of the other magnetic elements.

Magnetic charge
Magnetic flux, $$ \Phi$$, in the physical circuit is the analog of charge, $$Q$$, in the model circuit.

Magnetic flux
Charge, $$ Q$$, in the physical circuit is the analog of flux in the model circuit. The units of charge (coulombs or ampere-seconds) and flux (webers or volt-seconds) are duals of each other.

Summary of analogy between magnetic circuits and electrical circuits
The following table summarizes the mathematical analogy between electrical circuit theory and magnetic circuit theory.