Talk:Jupiter brain

Questions about the article
I do not understand the statement, "While a rigid solid object the size and mass ... could not be built using any known material". I believe that Mercury, Mars, and a host of planetary moons and asteroids would all qualify as "rigid solid objects". Indeed, even Jupiter is believed to have a solid heavy-metal core. As most of the heat in planets comes from the gravitational energy of infalling material released during planetary accretion, one should be able to build a "cool" solid object so long as one radiates away that energy during construction (or carefully arranges the construction to minimize its accumulation, i.e. all incoming material has very 'soft' landings).
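To put a rough number on "radiating away the accretion energy": a sketch (my own figures, not from the article) of how long a body would take to shed its gravitational binding energy as blackbody radiation, using an Earth-mass example:

```python
import math

G = 6.674e-11           # gravitational constant, m^3/(kg s^2)
sigma = 5.670374419e-8  # Stefan-Boltzmann constant, W/(m^2 K^4)

def cool_accretion_time_yr(mass_kg, radius_m, T_surface=300.0):
    """Years needed to radiate away the gravitational binding energy
    U = 3GM^2/(5R) of a uniform-density sphere, assuming it radiates
    as a blackbody at T_surface the whole time (a crude bound on the
    construction schedule)."""
    U = 3 * G * mass_kg**2 / (5 * radius_m)               # binding energy, J
    P = 4 * math.pi * radius_m**2 * sigma * T_surface**4  # radiated power, W
    return U / P / 3.156e7                                # seconds per year

# An Earth-mass, Earth-radius body radiating at 300 K needs on the
# order of 3e7 years to shed its binding energy.
t = cool_accretion_time_yr(5.97e24, 6.37e6)
```

Under these assumptions, "soft landings" are not an optional nicety: radiating the full accretion energy at room temperature takes tens of millions of years.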

So long as the mass does not exceed the point at which gravitational attraction would drive fusion (of elements with atomic number lower than iron's) or a collapse into neutronium, it should be possible to form a solid object. The problem arises when you would like to position computational elements within the solid object, because heat removal is limited by the conduction capacity of the solid matter (and the surface area of the "Brain"). You can devise a liquid cooling system, but the pipes (and pumps) would have to function at the very high pressures present near the core. (Could small multi-wall buckytubes withstand this? I don't know.)

To avoid the problem of circulating large amounts of cooling fluid (which is a waste of energy), it is generally thought (at least by me) that optimal Jupiter Brain architectures would be rather like oranges or apples, with all the computational elements in the outer skin. The core could consist of either (a) a huge point-to-multipoint fiber network (preferably "everypoint-to-everypoint") or (b) empty space (perhaps gas-filled to balance the gravity from the skin), which would allow internodal communications based on free-space laser, microwave or radio links (perhaps all in combination).

So long as the body isn't positioned too close to any large gravitational masses, there shouldn't be any tidal heating (think Europa), so there need not be any "heat gradient" or "convection" (assuming you have removed radioactive elements from the construction materials). The article as currently written seems to suggest one is dealing with a modified natural planet with a liquid core rather than a completely engineered object of planetary size or mass.

Heat dissipation isn't a problem so long as the energy being radiated doesn't exceed that allowed by the Stefan-Boltzmann law. That in turn depends on how much power the Jupiter Brain (a) generates or (b) has delivered to it. Ultimately you are up against the heat limits of nano-based computronium. The benchmark here is that the 1 cm^3 Drexler rod-logic computer consumes 100,000 W. So even a 1 cm^3 "Jupiter Brain" has significant power-delivery and heat-removal problems. That is what triggers the transition to a Matrioshka Brain -- power delivery and heat removal are intrinsic aspects of that architecture (and presumably significantly more resource-efficient than Jupiter Brain architectures). There may, however, be specific types of computational problems where ultra-high inter-node communication capacity (even if energy or material usage is wasteful) is desirable. These would presumably be problems which *have* to be solved in the shortest "real time" period. Matrioshka Brains might have Jupiter Brains dispersed throughout their architectures (throughout a solar system) for solving problems which fall into such categories.
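The mismatch between Drexler's volumetric figure and radiative cooling can be sketched in a couple of lines (my own back-of-envelope, assuming blackbody radiation at the surface temperature):

```python
sigma = 5.670374419e-8  # Stefan-Boltzmann constant, W/(m^2 K^4)

def max_shell_thickness_m(T_surface, power_density_W_m3=1e11):
    """Thickness of active computronium a radiatively cooled surface
    can support: the depth at which volumetric heat production
    (100 kW/cm^3 = 1e11 W/m^3 for Drexler rod logic) equals the
    blackbody flux sigma * T^4 leaving each square metre."""
    return sigma * T_surface**4 / power_density_W_m3

# At 300 K the radiated flux is ~460 W/m^2, so the fully active
# layer can only be a few nanometres thick -- anything deeper just
# accumulates heat.
d = max_shell_thickness_m(300.0)
```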

It is also true that Jupiter Brains could primarily be large data stores (think powered-down hard drives, flash RAM, etc.). As various forms of magnetic, chemical-state, etc. storage do not require energy to maintain their contents, the only heat they would generate would be that required to accept storage retrieval requests, process them, and deliver the result. Thus the power and heat-dissipation requirements could be quite low (provided one is not trying to rapidly read or write large multiples of the total information content of the Jupiter Brain). —The preceding unsigned comment was added by RobertBradbury (talk • contribs) on 21:50, 19 August 2006.


 * To briefly address some of the points you raise:


 * Terrestrial planets aren't rigid. Their interiors move in complicated convection patterns, with shear forces far beyond what rock and metal can withstand. Even the Moon, which was once thought to be cold and completely solid, turns out to have parts of its mantle moving by creep. While you could postulate that a carefully constructed artificial world could avoid internal heating and tidal forces, and so not have any stresses that cause this type of deformation, a definition more in line with what one intuitively expects from the term "rigid solid" would be a structure in which the pressure at all points within the body is less than the compressive strength of the materials that compose it (i.e., all deformation is elastic).
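The "pressure below compressive strength everywhere" criterion can be checked with the standard uniform-density central-pressure formula (a sketch; the ~1e11 Pa figure for diamond's compressive strength is my assumption, not a number from the thread):

```python
import math

G = 6.674e-11  # gravitational constant, m^3/(kg s^2)

def central_pressure_Pa(mass_kg, radius_m):
    """Central pressure of a uniform-density self-gravitating sphere:
    P_c = (3 / (8*pi)) * G * M^2 / R^4.  Real planets, whose density
    increases toward the centre, run somewhat higher."""
    return 3 * G * mass_kg**2 / (8 * math.pi * radius_m**4)

# Earth mass and radius give ~1.7e11 Pa, already above the ~1e11 Pa
# compressive strength usually quoted for diamond -- so even an
# Earth-mass body fails the "rigid solid" criterion with the
# strongest known bulk material.
p = central_pressure_Pa(5.97e24, 6.37e6)
```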


 * The idea of having computing elements on the skin of the Jupiter Brain and an optical interconnect forming the bulk of the structure is proposed in one of the article's references. Mostly this was to maximize communications bandwidth (massively parallel computing systems are communications-limited for almost all tasks). Unless the communication is through vacuum (via free-space optical links), though, you'll get internal heating due to absorption of photons within the fiber bundles. This will increase up to the point where the heat gradient is steep enough for outflux via heat conduction to balance absorption. How quickly this becomes a problem depends on what you assume the communications needs are, but absorption will be very serious for any cable run longer than about a hundred kilometres (long-distance communications cables have repeaters to offset this attenuation, but don't have to worry about the heat dissipation that results).
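The fiber-heating point can be quantified from the attenuation coefficient (the 0.2 dB/km figure is typical of modern silica fiber at 1550 nm; it is my assumption rather than a number from the thread):

```python
def fraction_dissipated(length_km, atten_dB_per_km=0.2):
    """Fraction of the launched optical power absorbed or scattered
    inside the fiber (ultimately appearing as heat) over a run of
    the given length, from the usual dB attenuation law."""
    return 1.0 - 10.0 ** (-atten_dB_per_km * length_km / 10.0)

# At 100 km the loss is 20 dB: 99% of the transmitted power ends up
# as heat inside the structure, consistent with the "about a hundred
# kilometres" threshold mentioned above.
f = fraction_dissipated(100.0)
```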


 * I'm pretty sure that heat dissipation is an even bigger problem than Drexler's numbers indicate. Considering electron-based systems gives a useful way of assessing this. The way I usually estimate this myself is to assume some number of electrons needed to register a node voltage, given that when you count $$n$$ electrons, the statistical error is about $$\sqrt{n}$$. This gives between 10 and 100 electrons, depending on how conservative you want to be. The voltage change needed depends on temperature, being equal to about $$4kT$$ to overcome thermal noise. This gives about 100 mV for room-temperature operation, about 1 mV at cosmic-background temperatures, or about 1 V for white-hot diamond (say about 3000 K). Combining these numbers gives energy per bit flip in the internal state of the computer (0.01/1/10 eV per bit flip, for the three temperature ranges under consideration). Using mechanical or optical systems instead of electrical systems gives you roughly the same numbers, because the energy gap between the states being distinguished has to be about $$4kT$$ or greater, and the number of events being counted has to be large enough for statistical fluctuations to even out. What this gives you in power per unit mass tends to vary, but power per unit surface area of the brain is easy to calculate if you assume cooling is purely radiative. For 300 K, you have about 1e+3 W/m^2, giving 6e+21 bit flips per second per m^2 at 1 eV, and for 3000 K, you have about 1e+7 W/m^2, giving 6e+24 bit flips per second per m^2 at 10 eV. This can be implemented in a film microns to nanometres thick, depending on your assumptions about node volume and clock rate. Any thicker, and you can store more information, but not think any faster.
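This estimate chains together in a few lines (a sketch of the same arithmetic, using the electron count and constants stated above):

```python
k_B = 1.380649e-23      # Boltzmann constant, J/K
eV = 1.602176634e-19    # J per electron-volt
sigma = 5.670374419e-8  # Stefan-Boltzmann constant, W/(m^2 K^4)

def bit_energy_eV(T, n_electrons=10):
    """Energy per bit flip: ~4kT per electron to beat thermal noise,
    times the electrons counted per node (10 here, the low end of
    the 10-100 range above)."""
    return 4 * k_B * T * n_electrons / eV

def flips_per_m2_s(T, e_bit_eV):
    """Bit flips per second per square metre of surface if cooling
    is purely radiative (blackbody at temperature T)."""
    return sigma * T**4 / (e_bit_eV * eV)

# 300 K: ~1 eV per flip and a few times 1e21 flips/s per m^2, the
# same order as the figures quoted above.
e = bit_energy_eV(300.0)
r = flips_per_m2_s(300.0, 1.0)
```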


 * I agree that data storage is a more forgiving application, but energy has to be spent to correct errors that accumulate. The rate of error accumulation depends on how much above $$kT$$ the state gap is (for thermal disturbances), and has a lower limit that depends on the amount of cosmic radiation permeating the structure, and on the level of trace radioactivity of the structure's materials. Quantum tunnelling can also induce errors in very compact systems (node patterns with the same number of high and low bits have the same energy, and patterns with fewer bits high are actually energetically favourable to tunnel to, with a barrier width that depends on the system geometry).
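The thermal contribution to the storage error rate follows an Arrhenius-type law; a sketch (the 1e13 Hz attempt frequency is my assumption, a typical phonon-scale figure):

```python
import math

k_B = 1.380649e-23    # Boltzmann constant, J/K
eV = 1.602176634e-19  # J per electron-volt

def thermal_flip_rate_Hz(gap_eV, T, attempt_Hz=1e13):
    """Rough rate of spontaneous thermal bit errors: an attempt
    frequency times the Boltzmann factor exp(-E_gap / kT) for
    hopping over the state gap."""
    return attempt_Hz * math.exp(-gap_eV * eV / (k_B * T))

# A 1 eV gap at 300 K gives roughly one error per bit every couple
# of hours -- hence stored data still needs periodic scrubbing, and
# deeper gaps or colder storage to keep the scrubbing power low.
r = thermal_flip_rate_Hz(1.0, 300.0)
```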


 * In the end, though, the article can only contain claims that were made in external references. It's actually pretty shaky on that even in its current, stubby form. --Christopher Thomas 23:08, 19 August 2006 (UTC)

--- The points made by Chris are valid. One can of course have planetary-sized "solid" objects. The problem arises when one wants to construct a computer of planetary size; the fundamental problem then becomes heat dissipation, which is of course limited by surface area. Drexler and Henson addressed this to some degree with their patent regarding optimal cooling design. [See Nanosystems, pp. 331-332.] There is also the cost and waste of energy in circulating a cooling fluid. So in my mind a "Jupiter Brain" is a relatively small object with all of the computing elements on the surface and with the interior dealing only with communication. The computations Chris offers may actually limit the size of such a Jupiter Brain, and it may not even approach the size of Jupiter. It would be useful for someone to work these out in detail to provide realistic constraints. But bear in mind there are some other limits with respect to how much energy you can pre-load into a Jupiter Brain, how much energy you can deliver to it, and its ongoing heat output (which may be reduced by methods of reversible computation). The basic thing to bear in mind is that a Jupiter Brain is a "relatively" solid object constrained by the laws of physics, while a Matrioshka Brain is a dispersed object operating at the limits of physical laws.

Robert (talk) 17:30, 22 March 2009 (UTC)Robert Bradbury