Wikipedia:Reference desk/Archives/Mathematics/2012 July 21

= July 21 =

System of bilinear equations
We know that a system of linear equations can be solved in polynomial time (in terms of input bits). I want to know how we solve a system of bilinear equations and, if we can, whether we can do so in polynomial time (in terms of input bits). — Preceding unsigned comment added by Karun3kumar (talk • contribs) 15:32, 21 July 2012 (UTC)
 * See System of polynomial equations. Bo Jacoby (talk) 18:45, 21 July 2012 (UTC).
 * Here is a PDF of a 1997 paper called Systems of bilinear equations that discusses the general problem and how it can be solved. Looie496 (talk) 19:19, 21 July 2012 (UTC)
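A common heuristic for such systems (not necessarily the method of the linked paper) exploits the fact that a bilinear system $$x^T A_k y = b_k$$ is linear in x for fixed y and linear in y for fixed x, so one can alternate least-squares solves. A minimal sketch, with an invented toy system; each half-step cannot increase the residual, but the iteration can stall short of a solution, and no polynomial-time guarantee follows:

```python
import numpy as np

# Bilinear system: find x, y with x^T A_k y = b_k for k = 1..4.
# Alternating least squares: fix y, solve linearly for x, and vice versa.
rng = np.random.default_rng(0)
A = rng.standard_normal((4, 2, 2))                 # four 2x2 bilinear forms
x_true, y_true = np.array([1.0, -2.0]), np.array([0.5, 3.0])
b = np.array([x_true @ Ak @ y_true for Ak in A])   # consistent right-hand side

x, y = np.ones(2), np.ones(2)                      # arbitrary initial guess
r0 = np.linalg.norm(np.array([x @ Ak @ y for Ak in A]) - b)   # initial residual
for _ in range(200):
    Mx = np.array([Ak @ y for Ak in A])            # rows (A_k y)^T: linear in x
    x = np.linalg.lstsq(Mx, b, rcond=None)[0]
    My = np.array([x @ Ak for Ak in A])            # rows x^T A_k: linear in y
    y = np.linalg.lstsq(My, b, rcond=None)[0]

r = np.linalg.norm(np.array([x @ Ak @ y for Ak in A]) - b)    # final residual
```

Because each half-step exactly minimizes its least-squares subproblem, the residual is non-increasing; convergence to a global solution is not guaranteed.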

(2x)^y=x
what is y?

example: 9^y=4.5

thank you — Preceding unsigned comment added by 79.180.141.120 (talk) 19:13, 21 July 2012 (UTC)
 * $$y = \log_{2x} x = \frac{\ln x}{\ln 2x} $$ (for x > 0)--Wrongfilter (talk) 20:24, 21 July 2012 (UTC)
 * $$=\frac{\ln x}{\ln 2 + \ln x} = \frac 1 {1 + \frac {\ln 2}{\ln x}}$$ --CiaPan (talk) 20:40, 21 July 2012 (UTC)
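The closed form above can be checked numerically on the poster's example $$9^y=4.5$$ (i.e. x = 4.5, so 2x = 9):

```python
import math

x = 4.5                               # example: 2x = 9, target 9^y = 4.5
y = math.log(x) / math.log(2 * x)     # y = ln x / ln(2x)
print(y)                              # about 0.6845
```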

Function inversion via control theory
I have an unknown function $$f: \mathbb R^2\to\mathbb R^2$$; writing $$(\xi,\eta)=f(x,y)$$, we have $$\frac{\partial\xi}{\partial x}>0$$ and $$\frac{\partial\eta}{\partial y}>0$$ everywhere (a sort of monotonicity). I suspect that f is convex in the same sense, but doing without that assumption would of course be more powerful.

I would like to evaluate (with a precision dependent on the computational effort expended) $$f^{-1}(\xi,\eta)$$ and the simpler $$f_x^{-1}(\eta)$$ where $$f_x(y):=f(x,y)$$. The numerical tool I have available takes the form of a dynamical system on $$\mu=(x,y,\ldots)$$ (where the dots indicate many additional dimensions). In this dynamical system $$\frac{dx}{dt}=\frac{dy}{dt}=0$$ (the reason for calling them dimensions will be given in a moment), and there exist known functions $$\Xi(\mu)$$ and $$H(\mu)$$ such that $$\lim_{t\to\infty}\frac1t\int_0^t\bigl(\Xi(\mu(\tau)),H(\mu(\tau))\bigr)\,d\tau=f(x,y)$$. However, $$\lim_{t\to\infty}\Xi(\mu(t))$$ and $$\lim_{t\to\infty}H(\mu(t))$$ do not exist (they oscillate chaotically and thus serve as something like pink noise), and they may differ significantly from $$f(x,y)$$ for some time after whatever initial state.

So far, the obvious approach is to choose a $$\mu$$ for each of a number of pairs $$(x,y)$$, evaluate $$f(x,y)$$ by integrating $$\mu$$ for some period of time (dependent on desired accuracy), and then obtain some sort of fit $$\tilde f(x,y)$$ and thence $$\tilde f^{-1}(\xi,\eta)\approx f^{-1}(\xi,\eta)$$.
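The sample-fit-invert approach above can be sketched as follows. The true f here is an invented affine monotone map standing in for the unknown system, each "evaluation" is a time average of an oscillating noisy signal around f(x, y), and the surrogate $$\tilde f$$ is an affine least-squares fit, all purely illustrative:

```python
import numpy as np

def f(x, y):
    # toy stand-in for the unknown monotone map
    return np.array([1.5 * x + 0.2 * y, 0.3 * x + 2.0 * y])

rng = np.random.default_rng(1)
def noisy_average(x, y, n=2000):
    # time-average an oscillating, noisy reading around f(x, y)
    t = np.arange(n)
    osc = 0.3 * np.sin(0.37 * t) + 0.1 * rng.standard_normal(n)
    return (f(x, y)[:, None] + osc).mean(axis=1)

# sample on a small grid and fit an affine surrogate f~(x, y) = c + B @ (x, y)
pts = [(x, y) for x in (-1.0, 0.0, 1.0) for y in (-1.0, 0.0, 1.0)]
X = np.array([(1.0, x, y) for x, y in pts])          # design matrix
F = np.array([noisy_average(x, y) for x, y in pts])
coef, *_ = np.linalg.lstsq(X, F, rcond=None)         # rows: constant, d/dx, d/dy
c, B = coef[0], coef[1:].T                           # B approximates the Jacobian

def f_inv(xi, eta):
    # invert the affine surrogate: solve c + B @ (x, y) = (xi, eta)
    return np.linalg.solve(B, np.array([xi, eta]) - c)

xi, eta = f(0.4, -0.2)
x_hat, y_hat = f_inv(xi, eta)
```

The fit quality, and hence the inversion error, is limited by how well the averaging suppresses the oscillation and noise at each grid point.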

However, the reason x and y were included in $$\mu$$ is that there may exist a better algorithm that approaches the solution continuously (a sort of optimal control and/or stochastic filter) by varying them during one (long) integration. The system will take time to "recover" towards $$f(x,y)$$ after such a change, and it's easy to overcontrol by reacting to fluctuations in $$\Xi(\mu)$$ and $$H(\mu)$$, so what's the best approach here? --Tardis (talk) 22:53, 21 July 2012 (UTC)


 * $$\frac{dx}{dt}=\frac{dy}{dt}=0$$ or $$1$$? 75.166.200.250 (talk) 06:41, 22 July 2012 (UTC)
 * 0. The only way they change is by external intervention in the controlling algorithm.  --Tardis (talk) 07:12, 22 July 2012 (UTC)
 * I feel terrible that I can't wrap my mind around the textual description of this. I always do this on the Math Desk, but if you could explain a little more about the application I might be able to help more, but for now, all I can offer is to suggest going through PID controller and seeing if anything jumps out at you. 75.166.200.250 (talk) 03:57, 23 July 2012 (UTC)
 * All that really jumps out at me from the PID scheme is that the P term, usually considered the most fundamental, seems questionably useful here; the solution of $$u=K_p(S(u)-S^*)$$ will give $$S(u)\approx S^*$$ only if it happens that $$S(0)\approx S^*$$. It's the integral term that's useful, as can be seen by differentiating: $$\dot u(t)=K_p\dot e(t)+K_i e(t)+K_d\ddot e(t)$$, so that u decays exponentially to whatever value makes $$e=0$$.  Lots of linked articles seem relevant, though, like lead-lag compensator and integral windup, which can occur here not through saturation but via the lag in the approach of $$(\Xi(\mu),H(\mu))$$ to $$(\xi,\eta)$$.
 * As for the application, it's a molecular dynamics simulation: x is density, y is specific energy, $$\xi$$ is pressure, and $$\eta$$ is temperature. You can change density and specific energy (within reason) at any time by scaling coordinates (for density) or velocities (for specific energy).  Then the system will adopt a new structure (producing a new pressure) and rebalance its mix of potential and kinetic energy (producing a new temperature).  However, the instantaneous measure of temperature is simply kinetic energy (up to a scalar constant), so it oscillates forever around the new value; pressure behaves similarly.  The goal is to adjust the density and specific energy in response so as to produce a target average pressure and temperature, without being misled by noise or oscillations and without overcorrecting because of the delay before a new oscillation is adopted.  Standard barostats and thermostats exist, but they tend to hold the pressure and temperature constant (and thus drive fluctuations in the density and energy) rather than allowing them to oscillate naturally.  --Tardis (talk) 06:38, 24 July 2012 (UTC)
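The integral-only idea from the thread can be sketched as pure integral control acting on an exponentially averaged reading. Everything here is invented for illustration: the plant is a toy monotone map standing in for (density, energy) → (pressure, temperature), and the gain and smoothing constant are arbitrary small values chosen to avoid overcontrol:

```python
import numpy as np

def plant(x, y):
    # toy monotone stand-in for (density, energy) -> (pressure, temperature)
    return np.array([2.0 * x + 0.2 * y, 0.3 * x + 1.5 * y])

target = np.array([1.0, 0.5])     # desired average (pressure, temperature)
x, y = 0.0, 0.0
avg = plant(x, y)                 # running estimate of the averages
gain, alpha = 0.05, 0.02          # integral gain; smoothing constant
rng = np.random.default_rng(2)

for t in range(20000):
    # instantaneous reading: true value plus oscillation and noise
    reading = (plant(x, y)
               + 0.3 * np.array([np.sin(0.41 * t), np.cos(0.29 * t)])
               + 0.05 * rng.standard_normal(2))
    avg = (1 - alpha) * avg + alpha * reading      # exponential average
    err = target - avg
    x += gain * err[0]                             # integral action: the only
    y += gain * err[1]                             # fixed point has err = 0
```

The exponential average filters the oscillation before the integrator sees it; making `gain` too large relative to `alpha` reintroduces exactly the overcontrol described above, since the controller then reacts faster than the averaged signal can respond.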