Wikipedia:Reference desk/Archives/Mathematics/2016 May 12

= May 12 =

System of Linear Equations
Is the following statement true? This seems to me wrong, but I can't find any counterexample...

Statement: Let $$ k > 1$$, and consider a system of linear equations in which each equation has exactly k terms $$y_i\in\{x_i,1-x_i\}$$, where $$\{x_i\}$$ are the variables of the system. That is, each equation has the form $$\sum_{y_i\in\{x_i,1-x_i\}}{y_i} = 1 $$. For example, the system may consist of the single equation $$ x_1+(1-x_2)=1 $$ (here, $$ y_1=x_1, y_2=1-x_2 $$).

This system of equations has some solution over $$\Z$$ iff it has a 0-1 solution (i.e., some solution over $$\{0,1\}\subset\Z $$). 213.8.204.30 (talk) 06:43, 12 May 2016 (UTC)


 * I bet it's true for k = 2. For k = 3, here is a counterexample:
 * a+b+c=1
 * (1-a)+d+(1-d)=1
 * (1-b)+d+(1-d)=1.
 * If you don't like having the same variable twice in a clause, use
 * a+b+c=1
 * (1-a)+d+(1-e)=1
 * (1-b)+d+(1-e)=1
 * d+f+g=1
 * e+f+g=1
 * --JBL (talk) 04:02, 13 May 2016 (UTC)
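The first (three-equation) counterexample above can be checked exhaustively; a quick sketch, where the search range for the integer check is an arbitrary choice:

```python
from itertools import product

# JBL's k = 3 counterexample, in variables (a, b, c, d):
#   a + b + c = 1
#   (1-a) + d + (1-d) = 1
#   (1-b) + d + (1-d) = 1
def satisfies(a, b, c, d):
    return (a + b + c == 1
            and (1 - a) + d + (1 - d) == 1
            and (1 - b) + d + (1 - d) == 1)

# No solution with every variable in {0, 1} ...
zero_one = [v for v in product((0, 1), repeat=4) if satisfies(*v)]
# ... but integer solutions exist: the last two equations simplify to
# a = 1 and b = 1, which forces c = -1 (d is free).
integer = [v for v in product(range(-2, 3), repeat=4) if satisfies(*v)]

print(zero_one)  # []
print(integer)   # the five tuples (1, 1, -1, d) for d in -2..2
```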


 * It's a great counterexample! Thank you!


 * What about the following statement: for $$k=3$$, if the system of equations has some solution over $$\Z$$ but no 0-1 solution, then there exists some variable $$x_i$$ such that $$x_i=-1$$ in every solution of the system over $$\Z$$? (That is, viewing a solution as an assignment to the variables: in every solution, $$x_i$$ is assigned the value -1.) 213.8.204.65 (talk) 05:07, 13 May 2016 (UTC)


 * That is certainly false, since I could switch c for 1-c everywhere. If you replace "-1" by "-1 or 2" then I don't have any intuition about it, unfortunately. JBL (talk) 12:59, 13 May 2016 (UTC)


 * Thank you! 213.8.204.65 (talk) 06:43, 14 May 2016 (UTC)


 * OK, here's another example that may make you give up hope ;):
 * a+b+w=1
 * (1-a)+b+x=1
 * a+(1-b)+y=1
 * (1-a)+(1-b)+z=1
 * JBL (talk) 18:52, 14 May 2016 (UTC)
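This second example can also be verified by brute force, including the fact that no single variable is pinned to -1 (or even to -1-or-2) across all integer solutions; a sketch with an arbitrary search range:

```python
from itertools import product

# JBL's second example, in variables (a, b, w, x, y, z):
#   a + b + w = 1
#   (1-a) + b + x = 1
#   a + (1-b) + y = 1
#   (1-a) + (1-b) + z = 1
def satisfies(a, b, w, x, y, z):
    return (a + b + w == 1
            and (1 - a) + b + x == 1
            and a + (1 - b) + y == 1
            and (1 - a) + (1 - b) + z == 1)

zero_one = [v for v in product((0, 1), repeat=6) if satisfies(*v)]
integer = [v for v in product(range(-2, 3), repeat=6) if satisfies(*v)]

print(zero_one)       # [] -- no 0-1 solution
print(len(integer))   # > 0 -- integer solutions do exist
# No variable sits in {-1, 2} in every integer solution:
for i in range(6):
    print(all(v[i] in (-1, 2) for v in integer))  # False for each i
```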

Numerical recipes for fixed point convergence
I have a smooth function $$f:\mathbb{R}^N \rightarrow \mathbb{R}^N$$ with at least one fixed point $$\vec{x_*}$$ such that $$f\left(\vec{x_*}\right) = \vec{x_*}$$. It also has the desirable property that if you take $$\vec{x_{i+1}} = f\left(\vec{x_i}\right)$$ then $$\vec{x_i}\rightarrow \vec{x_*}$$ as $$i\rightarrow \infty$$ for most starting guesses. In other words, it is an attractive fixed point.

My problem is that the function $$f$$ is very computationally expensive to evaluate, requiring about 5 minutes per iteration (using parallel processing on a 16-core workstation). Currently I need to follow the sequence for 40 or 50 steps to get an estimate of $$\vec{x_*}$$ within the tolerance that I need. Having this process take a few hours would be okay, except that I'd like to repeat it many more times for other variants of $$f$$. Hence, I'd like to find a way to accelerate the search for $$\vec{x_*}$$, ideally calling $$f$$ only 10 or 15 times. I tried using a canned root-finding routine (on $$\lVert f\left(\vec{x}\right) - \vec{x} \rVert$$), but that converged even more slowly than just following the sequence.

Does anyone have suggestions for numerical recipes to accelerate the process of finding such a fixed point? Though $$f$$ is smooth, it is also safe to say that I don't have any useful prior knowledge of its derivatives, so gradient type methods will have to rely on whatever information can be gained from evaluating $$f$$. Dragons flight (talk) 15:01, 12 May 2016 (UTC)
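One standard derivative-free option for exactly this setting (expensive $$f$$, large N, no Jacobian information) is Anderson acceleration: it stores the last few iterates and residuals and takes an extrapolated step obtained from a small least-squares problem, at one call to $$f$$ per iteration. A minimal numpy sketch; the function name and parameter defaults are illustrative, and related nonlinear mixing routines also exist in scipy.optimize:

```python
import numpy as np

def anderson_fixed_point(f, x0, m=5, tol=1e-10, max_calls=50):
    """Anderson acceleration for x = f(x), one call to f per iteration.

    A sketch: m is the history depth, tol the residual tolerance in the
    2-norm; both defaults are arbitrary choices.
    """
    x = np.asarray(x0, dtype=float)
    Xs, Gs = [], []                      # recent iterates and their images
    for _ in range(max_calls):
        g = f(x)
        r = g - x                        # residual f(x) - x
        if np.linalg.norm(r) < tol:
            return g
        Xs.append(x); Gs.append(g)
        if len(Xs) > m + 1:              # keep at most m+1 pairs
            Xs.pop(0); Gs.pop(0)
        k = len(Xs)
        if k == 1:
            x = g                        # plain fixed-point step to start
            continue
        # Least-squares fit on differences of residuals, then a mixed step.
        R = np.column_stack([Gs[j] - Xs[j] for j in range(k)])
        dR = R[:, 1:] - R[:, :-1]                       # (N, k-1)
        gamma, *_ = np.linalg.lstsq(dR, R[:, -1], rcond=None)
        dG = np.column_stack([Gs[j + 1] - Gs[j] for j in range(k - 1)])
        x = g - dG @ gamma               # extrapolated update
    return x

# Toy check on an affine contraction, whose fixed point is known exactly:
A = np.diag([0.9, 0.5, 0.3, 0.8])
b = np.array([1.0, 2.0, -1.0, 0.5])
x = anderson_fixed_point(lambda v: A @ v + b, np.zeros(4))
print(np.linalg.norm(x - b / (1 - np.diag(A))))  # tiny
```

On an affine map the method is exact once enough history accumulates; on a general smooth $$f$$ it typically converges much faster than plain iteration near an attractive fixed point.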


 * Can you get anything by looking at first differences $$\vec{x_{i+1}}-\vec{x_i}$$ (or second differences, or third...)? Robinh (talk) 19:48, 12 May 2016 (UTC)

As the function is slow to compute, approximate it by some other function which is fast to compute. Bo Jacoby (talk) 08:40, 13 May 2016 (UTC).

Have you tried something from the article about series acceleration? 213.8.204.65 (talk) 09:26, 13 May 2016 (UTC)


 * I've since tried implementing Broyden's method. However, I'm not sure I have done it correctly, as it doesn't seem to be improving the convergence right now. Dragons flight (talk) 12:42, 13 May 2016 (UTC)
 * I think it depends on how big your N is. If you're talking about $$\mathbb{R}^5$$, then it certainly should be possible to accelerate convergence, since after 10 iterations you should be able to estimate the Jacobian pretty accurately. If it's $$\mathbb{R}^{50}$$, then even if $$f$$ were linear you wouldn't know the Jacobian until you're already done, though it might be possible to get some useful information about it before then. If $$\mathbb{R}^{500}$$, then the Jacobian is basically unknown, and I think acceleration techniques pretty much rely on it. It also depends on how rapidly the Jacobian changes; otherwise no estimate is going to be useful. Another question is how fast the iterates converge now: if convergence is already quadratic, then you're already doing the best you can with linear methods. One additional thought: since you're doing this with multiple $$f$$'s, if it's possible to gauge how similar they are, you might try sequencing them so that similar $$f$$'s are adjacent, then use the ending point of each $$f$$ as the starting point for the next one. --RDBury (talk) 14:33, 13 May 2016 (UTC)
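For reference, the Broyden's method mentioned above, applied to the root-finding problem $$g(\vec{x}) = f(\vec{x}) - \vec{x} = 0$$, can be sketched as follows. Taking the identity (negated) as the initial inverse-Jacobian guess makes the first step coincide with a plain fixed-point step; all parameter defaults are illustrative:

```python
import numpy as np

def broyden_fixed_point(f, x0, tol=1e-10, max_calls=50):
    """'Good' Broyden's method on g(x) = f(x) - x.

    H approximates the inverse Jacobian of g and starts at -I, so the
    first step is exactly the plain fixed-point step x <- f(x).
    """
    x = np.asarray(x0, dtype=float)
    g = f(x) - x                            # root-finding residual
    H = -np.eye(x.size)
    for _ in range(max_calls - 1):          # one call to f already made
        if np.linalg.norm(g) < tol:
            break
        s = -H @ g                          # quasi-Newton step
        x_new = x + s
        g_new = f(x_new) - x_new
        y = g_new - g
        Hy = H @ y
        denom = s @ Hy
        if abs(denom) < 1e-30:              # guard against breakdown
            break
        # Sherman-Morrison rank-one update of the inverse Jacobian
        H += np.outer(s - Hy, s @ H) / denom
        x, g = x_new, g_new
    return x

# Toy check on an affine map (Broyden terminates quickly on linear problems):
A = np.diag([0.9, 0.5, 0.3, 0.8])
b = np.array([1.0, -2.0, 0.5, 2.0])
x = broyden_fixed_point(lambda v: A @ v + b, np.zeros(4))
print(np.linalg.norm(x - b / (1 - np.diag(A))))  # tiny
```

Note that this stores a dense N-by-N matrix, which at N around 3,000 is still feasible but no longer cheap; limited-memory variants avoid forming H explicitly.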


 * N is roughly 3,000. I haven't systematically tried to characterize the speed of convergence, but it doesn't feel very fast. I often observe a value move 1 unit at step n and then 0.8 to 0.9 units in the same direction at step n+1, which suggests a fairly slow rate of convergence. As you suggest, I am hopeful that the endpoints of similar $$f$$'s can be useful for acceleration, but I haven't tried that yet. Dragons flight (talk) 15:18, 13 May 2016 (UTC)

Infinite product
Let $$f(a) = \prod_{k=2}^{\infty} \frac{k^a - 1}{k^a + 1}, a \geq 1$$. Does $$f(a)$$ have a closed form? What is $$f'(a)$$? 24.255.17.182 (talk) 23:22, 12 May 2016 (UTC)


 * $$\prod_{k=2}^\infty \frac{k^a - 1}{k^a + 1}$$
 * $$=\exp\left(\sum_{k=2}^\infty \log\left(\frac{1-k^{-a}}{1+k^{-a}}\right)\right)$$
 * $$=\exp\left(\sum_{n=1}^\infty n^{-1}\sum_{k=2}^\infty \left((-k^{-a})^{n}-(k^{-a})^{n}\right)\right)$$
 * $$=\exp\left(-2\sum_{n=0}^\infty (2n+1)^{-1}\sum_{k=2}^\infty k^{-a(2n+1)}\right)$$
 * $$=\exp\left(-2\sum_{n=0}^\infty (2n+1)^{-1}\left(\zeta(2an+a)-1\right)\right)$$
 * Bo Jacoby (talk) 09:19, 13 May 2016 (UTC).


 * Some values and limits: $$f(1) = 0;\ f(3) = 2/3;\ f(\infty) = 1;\ f'(1) = 0;\ f'(\infty) = 0$$. Dragons flight (talk) 13:39, 13 May 2016 (UTC)
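The values $$f(1)=0$$ and $$f(3)=2/3$$, and Bo Jacoby's zeta-series formula, can be checked numerically; a quick sketch, where the truncation depths are arbitrary choices:

```python
from math import exp

def f_partial(a, K=10_000):
    """Truncated product prod_{k=2}^{K} (k^a - 1)/(k^a + 1)."""
    p = 1.0
    for k in range(2, K + 1):
        p *= (k**a - 1) / (k**a + 1)
    return p

def zeta_minus_1(s, K=100_000):
    """zeta(s) - 1 = sum_{k>=2} k^{-s}, truncated at K terms."""
    return sum(k**-s for k in range(2, K + 1))

def f_via_zeta(a, terms=12):
    """The series formula exp(-2 sum_n (zeta(2an+a) - 1)/(2n+1))."""
    s = sum(zeta_minus_1(a * (2 * n + 1)) / (2 * n + 1) for n in range(terms))
    return exp(-2 * s)

# For a = 3 the product telescopes to (2/3)(n^2+n+1)/(n^2+n) -> 2/3:
print(f_partial(3))   # approx 2/3
print(f_via_zeta(3))  # approx 2/3 as well
# For a = 1 the product telescopes to 2/(n(n+1)) -> 0:
print(f_partial(1))   # approx 0
```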