Wikipedia:Reference desk/Archives/Mathematics/2008 September 14

= September 14 =

Validity of proof
Is this proof valid? If so, can you help me find a reference for it on the Internet? --Bowlhover (talk) 02:00, 14 September 2008 (UTC)


 * What's highlighted in green is accurate, and is a straightforward enough application of basic principles that it doesn't need a reference. As a formatting consideration you might try taking the fractions out of their math brackets and just writing them normally, p²/q². Black Carrot (talk) 05:50, 14 September 2008 (UTC)


 * "Since all numbers which can be represented by p/q are by definition rational, the above implies that all rational square roots are roots of a ratio of two perfect squares," should be rephrased somewhat, the implication is in the wrong direction. Maybe, "Since all rational numbers can by definition be represented in the form p/q, the above implies that all rational square roots are roots of a ratio of two perfect squares." Black Carrot (talk) 05:54, 14 September 2008 (UTC)


 * The proof does depend on the unstated assumption that gcd(p²,q²) = 1 if gcd(p,q) = 1. This is true in a unique factorization domain such as Z, but is not true in all integral domains. Other proofs make this dependence clear by using unique factorisation directly. Gandalf61 (talk) 08:36, 14 September 2008 (UTC)


 * Yep, I agree with both contributors above; the proof is flawed. The sentence "p²/q² can be assumed to be in lowest terms" is where it all falls down, but it was heading the wrong way from the beginning. Dmcq (talk) 08:50, 14 September 2008 (UTC)


 * Is it necessary to deal with any numbers except integers? A perfect square and its square root must be integers, and p and q can be assumed to be.  --Bowlhover (talk) 14:27, 14 September 2008 (UTC)

If I may interject two points. First of all, I find the section too detailed for inclusion in the main article. Perhaps there is enough material scattered about to start an article on the square root of rational numbers. There are a number of proofs indicated in the square root of 2 article, and I would rather see the section in irrational number kept brief and in summary style. Second, the lengthy proof is not necessary. One can prove considerably more in much less space by appealing to Gauss's lemma (for which there already is an article): any rational number which is an algebraic integer is a (rational) integer. siℓℓy rabbit (talk) 14:51, 14 September 2008 (UTC)


 * Richard Dedekind's proof seems a good one to me. It doesn't depend on proving the Fundamental Theorem first and is fairly short; it only depends on numbers being ordered. Dmcq (talk) 08:16, 16 September 2008 (UTC)
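The claim under discussion, that √(p/q) with p/q in lowest terms is rational exactly when p and q are both perfect squares, is easy to sanity-check numerically. A minimal Python sketch (function names are mine, and the criterion itself leans on unique factorization in Z, as Gandalf61 notes above):

```python
from fractions import Fraction
from math import gcd, isqrt

def sqrt_is_rational(p, q):
    """For p/q in lowest terms, sqrt(p/q) is rational exactly when
    p and q are both perfect squares (valid in Z because it is a
    unique factorization domain)."""
    assert gcd(p, q) == 1 and q > 0
    return isqrt(p) ** 2 == p and isqrt(q) ** 2 == q

def sqrt_is_rational_direct(p, q):
    """Independent check: if sqrt(p/q) = a/b in lowest terms, then
    p = a^2 and q = b^2, so the only candidate root is
    isqrt(p)/isqrt(q); test whether it actually squares back."""
    f = Fraction(p, q)
    cand = Fraction(isqrt(f.numerator), max(isqrt(f.denominator), 1))
    return cand * cand == f

# The two criteria agree on every reduced fraction in a small range.
for p in range(1, 60):
    for q in range(1, 60):
        if gcd(p, q) == 1:
            assert sqrt_is_rational(p, q) == sqrt_is_rational_direct(p, q)
```

For example √(4/9) = 2/3 is rational, while √2 is not, which the check reproduces.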

Would like a math expert to look over my experimentation with a simple physics equation
$$ f_k = \frac{\text{joules}}{\text{meter}} = \frac{d}{dx} E_{th} $$


 * Frictional force $$ f_k $$, measured in newtons (= joules/meter), shows that the kinetic frictional force is the derivative with respect to x of the internal kinetic energy of an isolated system as the block undergoes a displacement x. Since $$ f_k $$ is a constant, and is equal to $$ \mu_k N $$ (where N is the normal force), $$\mu_k\,$$ can be thought of as...

Help me from here, please. I want to continue the line of thinking that $$f_k\,$$ is a derivative of internal kinetic energy.

A month ago, I realized that $$\mu_k\,$$ expresses the kinetic frictional force as a percentage of the normal force, and this insight really helped me understand everything in physics much better. I.e. $$ \mu_k\,$$ = 0.40 means the kinetic frictional force is 40% of the normal force.

I also really liked the algebraic proof that $$\mu_s\,$$ is the solution to the equation $$ \mu_s = \tan \alpha \,$$ where alpha is the angle of incline at which a block, starting from rest, begins to slide.
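For reference, that proof can be sketched in a few lines (this is the standard statics argument, not quoted from anywhere in the thread): at the angle α where sliding just begins, the maximum static friction balances the downhill component of gravity,

$$ mg\sin\alpha = f_s^{\max} = \mu_s N $$

while the normal force balances the perpendicular component,

$$ N = mg\cos\alpha $$

so dividing the first equation by the second,

$$ \mu_s = \frac{mg\sin\alpha}{mg\cos\alpha} = \tan\alpha $$

Note that the mass m cancels, which is why the result depends only on the angle.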

So today, I'm trying to round out my collection and would like an insightful way to represent an intuitive relationship between $$ \mu_k \,$$ to work or internal-kinetic-energy.

So, ignoring angles of incline, and ignoring mass and gravity (take the angle to be 0, and leave the mass and gravity as m1 and g respectively), is there any way for me to algebraically rearrange variables or shuffle equations to get a neat relation between $$\mu_k\,$$ and work or internal kinetic energy? I haven't found one, so if one doesn't exist, are there any other outside-the-box ways to look at the relationships between these variables?

When I try to play creatively with formulas, terms like $$ {\mu_k}^2 $$, or a difference of terms such as $$ \mu_k $$ (initial) minus $$ \mu_k $$ (final), have more flexibility: you can apply more algebra to get new equations from old ones.

Thanks in advance for any input. My goal is to learn physics slowly and very analytically, always looking for ways to inject redundant mathematical thinking into my notes that I keep. Sentriclecub (talk) 12:23, 14 September 2008 (UTC)


 * addendum from the kinetic friction page...


 * When an object is pushed along a surface, the energy converted to heat is given by:


 * $$E = \mu_\mathrm{k} \int F_\mathrm{n}(x) dx\,$$

where
 * $$F_\mathrm{n}$$ is the normal force,
 * $$\mu_\mathrm{k}$$ is the coefficient of kinetic friction,
 * x is the coordinate along which the object traverses.


 * I may have wrongly presumed that calculus leads to nowhere. (I'm stronger in algebra than calculus) Sentriclecub (talk) 12:29, 14 September 2008 (UTC)
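That integral from the kinetic friction page is straightforward to evaluate numerically. Here is a Python sketch using made-up numbers (m = 1 kg, μ_k = 0.40, 2 m of flat travel then 8 m up a 60° incline, with the pull parallel to the surface so that the normal force is mg cos θ on the incline; none of these values come from the thread):

```python
from math import cos, radians

# Hypothetical setup: 1 kg block, mu_k = 0.40, pushed 2 m on a flat
# surface and then 8 m up a 60-degree incline.
m, g, mu_k = 1.0, 9.8, 0.40

def normal_force(x):
    """Normal force as a function of distance x travelled along the path."""
    return m * g if x < 2.0 else m * g * cos(radians(60))

# E = mu_k * integral of F_n(x) dx over the 10 m path,
# approximated by a midpoint Riemann sum.
n = 100000
dx = 10.0 / n
E = mu_k * sum(normal_force((i + 0.5) * dx) * dx for i in range(n))

# Closed form for this piecewise-constant normal force.
exact = mu_k * (m * g * 2.0 + m * g * cos(radians(60)) * 8.0)
print(E, exact)  # both about 23.52 J
```

The numerical and closed-form answers agree, which is a decent check that the integral is being read correctly: it just accumulates normal force over distance, weighted by μ_k.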

numerical analysis
plz help me solve this example with detailed solution. Find a root of the equation x sin x + cos x = 0, using Newton-Raphson method. Miral b (talk) 12:42, 14 September 2008 (UTC)


 * Rearranging the equation gives x = −cot(x). Looking at the graphs of y = x and y = −cot(x) you can see that there are an infinite number of solutions. As you only need to find one solution, you could find the solution that lies between 3π/4 (where x sin(x) + cos(x) = 0.9588...) and π (where x sin(x) + cos(x) = −1) - I am assuming x is in radians here. Starting from either of these initial values, Newton-Raphson converges quickly to a solution, giving at least 8 decimal places in just 4 iterations. I set it up in Excel - alternatively you could use your favourite programming language, or do it with a calculator in less than 10 minutes. Gandalf61 (talk) 13:25, 14 September 2008 (UTC)
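A sketch of the iteration Gandalf61 describes, in Python. The derivative simplifies nicely: d/dx (x sin x + cos x) = sin x + x cos x − sin x = x cos x.

```python
from math import sin, cos

f = lambda x: x * sin(x) + cos(x)
fp = lambda x: x * cos(x)   # derivative: sin x + x cos x - sin x = x cos x

# Start between 3*pi/4 and pi, where the function changes sign.
x = 3.0
for _ in range(6):
    x = x - f(x) / fp(x)    # Newton-Raphson step

print(x)  # about 2.798386
```

Starting from 3.0, the iterates go 2.80921, 2.79841, ... and settle at about 2.798386 within a handful of steps, consistent with the quadratic convergence mentioned above.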

The function $$f(x)=x\sin x+\cos x$$ is an even function: $$f(-x)=f(x)$$. The Taylor series
 * $$x \sin x + \cos x = 1-\sum_{k=1}^\infty \frac{(2k-1)(-x^2)^k}{(2k)!} = 1+\frac 1 2 x^2-\frac 1 8 x^4+\frac 1 {144} x^6-\frac 1 {5760}x^8+ \cdots$$

contains only even powers of x. So it is simpler to set $$t=x^2$$ and solve the nth degree equation
 * $$1-\sum_{k=1}^n \frac{(2k-1)(-t)^k}{(2k)!} = 0$$

for some sufficiently big value of n.

The two smallest solutions to x sin x + cos x = 0 are
 * $$x\approx\pm 1.19968i$$

The smallest real solutions are
 * $$x\approx \pm 2.79839$$

In order to reach this accuracy you need n ≥ 9. The J-code used is %:_1 _2{1{>p.((*1&o.)+2&o.)t.2*i.9, and the result produced is 0j1.19968 2.79839. Bo Jacoby (talk) 22:14, 14 September 2008 (UTC)
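A pure-Python version of the same idea, using bisection on the truncated series instead of J's polynomial root finder. The coefficient of (−t)^k works out to (2k−1)/(2k)!, which matches the expanded terms ½x², −⅛x⁴, 1/144 x⁶ above:

```python
from math import factorial, sqrt

def S(t, n=9):
    """Truncated series for x*sin(x) + cos(x) with t = x^2:
    1 - sum_{k=1}^{n} (2k-1) * (-t)^k / (2k)!"""
    return 1 - sum((2*k - 1) * (-t)**k / factorial(2*k)
                   for k in range(1, n + 1))

def bisect(f, a, b, iters=60):
    """Simple bisection; assumes f changes sign on [a, b]."""
    for _ in range(iters):
        m = (a + b) / 2
        if f(a) * f(m) <= 0:
            b = m
        else:
            a = m
    return (a + b) / 2

t_neg = bisect(S, -2.0, -1.0)  # t = x^2 < 0, so x is imaginary
t_pos = bisect(S, 7.0, 8.0)    # smallest real solution, squared

print(sqrt(-t_neg), sqrt(t_pos))  # about 1.19968 and 2.79839
```

This reproduces both values quoted above: the imaginary pair x ≈ ±1.19968i (from the negative root in t) and the real pair x ≈ ±2.79839.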

OK, you need to find when x·sin(x) + cos(x) = 0. Find where cos is negative and sine is positive, for instance in the second quadrant. Take a value in the second quadrant reasonably close to π but also reasonably close to 3π/2 as the first approximation (since the x in x·sin(x) increases the sin(x) term considerably). Then apply the formula.

Topology Expert (talk) 13:23, 17 September 2008 (UTC)


 * "Cos and sine are real-valued functions so you cannot speak of an imaginary solution." They aren't (for complex arguments), and sure you can. Please don't propagate these kinds of misconceptions. Fredrik Johansson 08:02, 20 September 2008 (UTC)

Is impulse related to work?
When an object is pushed along a surface, $$\mu_\mathrm{k}\,$$ is the ratio of the energy converted into heat to this denominator:

$$\mu_\mathrm{k}=\frac{E_{th}}{\int F_\mathrm{n}(x) dx\,}$$

How do I say the denominator in words? Is it "the indefinite integral of the normal force (as a function of x) with respect to x"?

I am picturing an example. A constant force is applied to block A which slides 2 meters on flat surface, then up a 60 degree incline for the final 8 meters.

I see a lot of new ways to further explore uses of $$\mu_\mathrm{k}\,$$ now, but needed to know what the denominator is in spoken words.

If this block is pulled by a string in such a way that the block has a constant velocity of 1 meter per second (and the string always pulls parallel to the block's momentum), then I could integrate the normal force as a function of time, then convert it into joules. This would involve solving for the impulse, then converting an impulse into work. I never covered subjects from Calc_II. Is it straightforward to convert an impulse to an amount of work, given this example? I have read the articles on these subjects, but they get too complicated too fast, and I get overwhelmed. I should be able to better attempt my first question if I am not afraid of making a calculus goof. Sentriclecub (talk) 13:28, 14 September 2008 (UTC)


 * For the denominator, you can just say "the integral of the normal force with respect to x" in most cases. Saying "integrating with respect to x" usually implies that every non-constant term in the integrand is somehow a function of x. You shouldn't really say "indefinite integral" in this context since, strictly speaking, the integral should have lower and upper limits (respectively, the initial and final positions of the object being moved). In the specific example you gave, it is very easy to convert the impulse into work since you know everything about the applied force -- in fact, if you use all SI units, the two quantities are numerically equal since you set the velocity to be 1 m/s. With Calc_1 knowledge, you should be able to quickly verify this using the equations $$J = \int F dt$$ and $$W = \int F dx$$ where J is the impulse and W is work. The complete answer to your question of whether there is a general relationship between impulse and work is more complicated. You definitely need to know the actual masses of the objects in question. I might explain this more in depth later, but for now I'll give you some helpful information that may lead you to work out the complete answer yourself. First, note that $$K = \frac{p^2}{2m}$$ where K is the kinetic energy and p is the momentum. Second, review the work-energy theorem, which states that the total work done on an object is equal to the object's change in kinetic energy. 97.90.132.94 (talk) 07:08, 15 September 2008 (UTC)


 * Thanks, and I hope you'll explain more, after a few days (link now available from my user page). I have a huge love for mathematics. I spend additional time, when learning physics, to try to understand the equations from an extremely mathematical point of view. I love working through proofs, and I will start working immediately on the part you gave me. That term with momentum², I've never seen before, so I'll jump right on it. I'm self-taught in calculus, so if I'll need more calculus knowledge to understand it, then that's a helluva motivation! Thanks very much! Sentriclecub (talk) 14:56, 15 September 2008 (UTC)


 * The $$K = p^2/2m$$ equation is just an extrapolation from p = mv. Anyway, I should probably revise what I said earlier. As far as I can tell, there aren't any particularly useful relationships between impulse and work (and I stress the word useful). Nevertheless, here is a helpful tip in case you haven't discovered this yet: sometimes you will need to use the equations $$J = mv - mv_0$$ and $$W_{net} = \frac{1}{2}mv^2 - \frac{1}{2}mv_0^2$$ and notice that $$v^2 - v_0^2 = (v+v_0)(v-v_0)$$. Sorry if I might have wasted your time with my earlier statement. You would still profit from learning more about the work-energy theorem, though. 97.90.132.94 (talk) 19:16, 16 September 2008 (UTC)


 * Thanks, actually I did solve my question. J * (v_avg) = W Sentriclecub (talk) 11:58, 18 September 2008 (UTC)
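That identity, W = J·v_avg with v_avg = (v + v₀)/2, is purely algebraic for constant mass, as the factoring hint above shows (note v_avg here is the mean of the initial and final speeds, which equals the time-averaged velocity only for constant acceleration). A quick numerical check with made-up numbers:

```python
# Check W_net = J * (v + v0)/2 for a 2 kg object accelerated
# from 3 m/s to 7 m/s (hypothetical values, not from the thread).
m = 2.0           # mass, kg
v0, v = 3.0, 7.0  # initial and final speeds, m/s

J = m * v - m * v0                    # impulse = change in momentum
W = 0.5 * m * v**2 - 0.5 * m * v0**2  # work-energy theorem
v_avg = (v + v0) / 2

print(J * v_avg, W)  # both 40.0
```

Algebraically: J·(v + v₀)/2 = m(v − v₀)(v + v₀)/2 = ½m(v² − v₀²) = W_net, which is exactly the difference-of-squares factoring 97.90.132.94 pointed out.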

earliest date of use or description
I began using computers to verify results of logical equations I solved previously by hand in about 1963. Then in about 1978 I found a computer method for reducing logical equations to minimum form in “Digital/Logic Electronics Handbook” by William L Hunter (pages 112-113, Tab Books, Blue Ridge Summit, PA) ISBN 0830657740 ISBN 0830657746 ISBN 9780830657742 called the Harvard chart Method of logical equation reduction. In 1981, I published a modification of the method to reduce multi-valued equations to minimum form as a computer program here. In doing the modification I may have become unconsciously aware of the ability to count and to sort sets and multisets using an indexed array as demonstrated here.

Nonetheless, I did not become consciously aware of this method until at least 1995, followed by its copyright and online publication in 1996 here and again in 2006 here. Consequently, I am searching for any publication prior to my own which might describe or demonstrate this method to count and to sort sets and multisets or to find the date the Counting sort was first described or published with its original definition found here and the date the Pigeonhole sort was first described or published with its original definition found here. 71.100.10.11 (talk) 14:04, 14 September 2008 (UTC)
 * I looked at your BASIC program and am puzzled by the statement 20 a(a)=a(a)+1. The variable a cannot be an array and an index at the same time. The claim that your sorting routine is the fastest in the world is not proved. Usually the speed of sorting routines is measured by the asymptotic behavior of the time consumption as a function of file size when the file size is big. Bo Jacoby (talk) 11:11, 15 September 2008 (UTC).
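For what it's worth, the line a(a)=a(a)+1 reads as the counting step of a counting sort (in dialects where an array and a scalar can share a name, as the replies discuss). In modern notation, with Python standing in for the uncertain original dialect, the whole method is only a few lines, and its running time is O(n + k) for n items with keys in 0..k, linear in the input only while the key range stays bounded:

```python
def counting_sort(data, max_value):
    """Sort small non-negative integers by tallying occurrences into
    an indexed array. The step count[v] += 1 is the Python
    equivalent of the BASIC line a(a)=a(a)+1."""
    count = [0] * (max_value + 1)
    for v in data:
        count[v] += 1          # tally each value at its own index
    out = []
    for v in range(max_value + 1):
        out.extend([v] * count[v])  # emit each value count[v] times
    return out

print(counting_sort([3, 1, 4, 1, 5, 9, 2, 6], 9))
# [1, 1, 2, 3, 4, 5, 6, 9]
```

This is why the asymptotic framing in the comment above matters: once the key range k grows faster than the number of items n, the O(n + k) bound stops being an advantage over comparison sorts.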
 * I think the original program was written in Zbasic. Yes, I know that speed claim is based on assumption since for one thing I did not create Instant sort, the hardware version, until Nov. 1996. ~ WP:IAR 71.100.4.227 (talk) 16:49, 15 September 2008 (UTC)
 * In fact I just pulled out an old copy of ZBasic and, while it has no problem with an index of the same name as an array, it does require that you dimension the array, unlike earlier versions of interpreted Basic which allowed up to ten dimensions before requiring a DIM statement. 71.100.4.227 (talk) 18:13, 15 September 2008 (UTC)
 * I seem to half remember that in some early variants of BASIC (BBC Basic?) ordinary variables, typed variables, functions and names all had separate address spaces, so you could do something like that (and have the string A$, integer A% and so on to do something else with). With the source in memory and 16k to play with, people would actually use all the single-character variable names they could in a large program. Edit: Yes, I found a reference:


 * Note that in BBC BASIC the different types of variable are completely independent. For example, the integer variable list%, the numeric variable list, the string variable list$ and the arrays list%, list and list$ are all entirely separate.


 * From -- Q Chris (talk) 12:57, 15 September 2008 (UTC)


 * Please note that 71.100.*.* has a history of creating original research articles such as Articles for deletion/Rapid sort and Articles for deletion/Optimal classification. See also Wikipedia talk:Reference desk/Archive 26 for repeated trolling of reference desks. --Jiuguang (talk) 17:20, 15 September 2008 (UTC)


 * Although both articles were moved to here and here and the latter is now listed by the National Institute of Standards and Technology here and are quite happy at their new homes, who would ever have guessed that user Jiuguang Wang is a stalker? Other users be warned. 71.100.4.227 (talk) 18:17, 15 September 2008 (UTC)