User:Bci2

My sandbox


{{Shortcut|WP:BBL|WP:BABEL}} {{Babel|en-5|fr-4|it-4|la-3|ro}}

{{User WPBiography}}

/Sandbox2

Headline text
A Main Page for Bci2 Editing

The List of Articles Initiated/Generated on Wikipedia by user Bci2

 * Molecular models of DNA

/Sandbox

Major Projects
Testing inline linking

References and notes

The easy link with no numbers, but bullets: link title

--Nu 08:13, 2 October 2008 (UTC) signature squiggle

Strike-through text
 * 1) REDIRECT Insert Text ?? what does this mean?

The extra space tags, or "carriage return"

Superscript text Subscript text Small Text

IMAGE addition with png filename added in :


Wikipedia guidelines for

"inline citations" and footnotes, which are key on wiki, as quoted below from this source:

"Footnotes serve two purposes. First, they are used to add material that explains a point in greater detail, particularly if the explanation would be distracting if written out in the main article. Second, they are used to cite the reliable sources that support an assertion in the main article. This is known as an "inline citation". Two different types of footnotes may be used for these two different purposes, as described below.

The prevailing system for adding footnotes to an article involves the use of "ref" tags. Different classes of footnotes can be defined within an article using the "group" parameter inside the "ref" tag, as described below.

Editors may also use the older system of template-based footnotes, such as Ref label and note label. These have the disadvantage that they are not numbered automatically; the editor has to choose a specific label. It is generally expected that footnotes will be labeled in the order in which they occur in the text. Therefore, if an editor adds such a template-based footnote in the middle of an article, the editor should also renumber/increment all the subsequent footnotes of the same type, by hand.

Footnotes are not the only way to cite sources. Alternative methods are embedded links and Harvard referencing (also known as author-date or parenthetical referencing). For more information, see Wikipedia:Citing sources, the main style guide on citations.

A key content policy, Wikipedia:Verifiability, says that any material that is challenged or likely to be challenged, including any contentious material about living people, and all quotations, must have a source. Unsourced or poorly sourced material may be removed from any article, and if it is, the burden of proof is on the editor who wishes to restore it."

DNA Structure
Hello, Tim. I'm also afraid that the current DNA article version has a basic flaw in its over-simplified presentation of the A-DNA structure and B-DNA configurations with regard to the interpretation of the corresponding X-ray patterns. This could have been corrected by adding relevant references to published papers, but that is being prevented by the over-protection of this entry through the locking of editing. At the very least, a new section could be added that allows the correction of this basic flaw, which places in doubt the scientific value of "comparisons" between A-DNA and what is commonly still called the 'B-DNA' configuration set. Furthermore, there are also several factual errors about the conditions under which A-DNA X-ray diffraction patterns have been generated, not to mention the lack of adequate credit to the experimenters involved. For an improved entry, such comments, with the introduction of the appropriate literature references, should help. Bci2 (talk) 10:27, 4 May 2009 (UTC).


 * Would it be possible for you to also look over Mechanical properties of DNA, which tries to deal with DNA structure in a much more rigorous way than the parent article? However, this isn't one I've spent any time editing, so I can't vouch for the contents. Tim Vickers (talk) 22:44, 4 May 2009 (UTC)

I've renamed this article DNA structure and used your additions to the DNA article to start a section on the determination of DNA structures. Would it be possible for you to expand this? We don't have such a space limitation here as in the general DNA article, and you can write at a bit of a higher level. Tim Vickers (talk) 15:39, 5 May 2009 (UTC)


 * Hi, Tim: Sounds like a great idea; why didn't I think of it (rhetorical question!)? Will add the required explanation as my time permits, as it involves a lot more referencing and explanation, but it is obviously of the essence. You are correct: the interests of science are best served by cooperation, as everyone stands to benefit from it. I'm glad that you see the point of the X-ray pattern comparison that I was initially trying to introduce in the DNA entry. Until we have the more complete story, however, I am suggesting that we keep the minimal explanation in place so that new readers are not misled by the 50-year-old literature. Bci2 (talk) 12:27, 5 May 2009 (UTC).


 * The section in the DNA article is still far too technical for our audience. If I'm struggling to understand what it means (and I studied DNA structure and topology as part of my first degree), then I don't think a high-school student will be able to get anything from the text. For example, some of the text says that the patterns analyzed come from "hydrated, bacterial oriented DNA fibers and trout sperm heads", but I have no idea what this is referring to. Tim Vickers (talk) 17:47, 5 May 2009 (UTC)

Sign
Nu 16:52, 12 March 2009 (UTC) Bci2; Nu 17:57, 2 March 2009 (UTC) Bci2; Nu 17:57, 2 March 2009 (UTC); WP:PHYS; WP:WPM

Category theory
http://en.wikipedia.org/wiki/Category_Theory

Ronald Brown (mathematician)
Completed and styled

http://en.wikipedia.org/wiki/Ronald_Brown_(mathematician)

Quark
http://en.wikipedia.org/wiki/Quark

Nuclear magnetic resonance
[NMR]

2D-FT NMRI and Spectroscopy
http://en.wikipedia.org/wiki/2D-FT_NMRI_and_Spectroscopy

2D-FT Nuclear magnetic resonance imaging (2D-FT NMRI), or two-dimensional Fourier transform nuclear magnetic resonance imaging, is primarily a non-invasive imaging technique most commonly used in biomedical research and medical radiology/nuclear medicine/MRI to visualize the structures and functions of living systems and single cells. For example, it can provide fairly detailed images of a human body in any selected cross-sectional plane, such as longitudinal, transverse, or sagittal. NMRI provides much greater contrast than computed tomography (CT), especially for the different soft tissues of the body, because its most sensitive option observes the nuclear spin distribution and dynamics of highly mobile molecules that contain the naturally abundant, stable hydrogen isotope 1H, as in plasma water molecules, blood, dissolved metabolites, and fats. This approach makes it most useful in cardiovascular, oncological (cancer), neurological (brain), musculoskeletal, and cartilage imaging. Unlike CT, it uses no ionizing radiation, and unlike nuclear imaging it does not employ any radioactive isotopes. Some of the first MRI images were published in 1973, and the first study performed on a human took place on July 3, 1977. Earlier papers were also published by Peter Mansfield in the UK (Nobel Laureate in 2003) and R. Damadian in the USA (together with an approved patent for magnetic imaging). Unpublished 'high-resolution' (50 micron resolution) images of other living systems, such as hydrated wheat grains, were obtained and communicated in the UK in 1977-1979, and were subsequently confirmed by articles published in Nature.

NMRI Principle
Certain nuclei, such as 1H, are spin-1/2 fermions and therefore have two spin states, referred to as the "up" and "down" states. The nuclear magnetic resonance absorption phenomenon occurs when samples containing such nuclear spins are placed in a static magnetic field and a very short radiofrequency pulse is applied with a center, or carrier, frequency matching that of the transition between the up and down states of the spin-1/2 1H nuclei that were polarized by the static magnetic field. Very low field schemes have also recently been reported.
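The resonance condition described above can be sketched numerically: the carrier frequency that matches the up/down transition is the Larmor frequency, ν = (γ/2π)·B0. This is a minimal illustration; the gyromagnetic ratios are standard approximate literature values, not taken from this article.

```python
# Sketch: matching the RF carrier frequency to the spin-1/2 transition.
# Approximate gyromagnetic ratios (gamma / 2*pi) in MHz per tesla.
GAMMA_OVER_2PI_MHZ_PER_T = {
    "1H": 42.577,   # proton
    "13C": 10.708,  # carbon-13
}

def larmor_frequency_mhz(nucleus: str, b0_tesla: float) -> float:
    """Resonance (Larmor) frequency nu = (gamma / 2*pi) * B0, in MHz."""
    return GAMMA_OVER_2PI_MHZ_PER_T[nucleus] * b0_tesla

# At a 1.5 T static field, 1H resonates near 63.9 MHz.
print(round(larmor_frequency_mhz("1H", 1.5), 1))  # 63.9
```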

Chemical Shifts
NMR is a very useful family of techniques for chemical and biochemical research because of the chemical shift: a shift of the nuclear magnetic resonance frequency for specific chemical groups or atoms that results from the partial shielding of the corresponding nuclei from the applied, static external magnetic field by the electron orbitals (or molecular orbitals) surrounding the nuclei in those chemical groups. Thus, the higher the electron density surrounding a specific nucleus, the greater the shielding will be. The resulting magnetic field at the nucleus is therefore lower than the applied external magnetic field, and the resonance frequencies observed as a result of such shielding are lower than the value that would be observed in the absence of any electron orbital shielding. Furthermore, in order to obtain a chemical shift value independent of the strength of the applied magnetic field, and to allow direct comparison of spectra obtained at different magnetic field values, the chemical shift is defined by the ratio of the strength of the local magnetic field at the observed (electron orbital-shielded) nucleus to the external magnetic field strength, Hloc/H0. The first NMR observations of the chemical shift, with the correct physical chemistry interpretation, were reported for 19F-containing compounds in the early 1950s by Herbert S. Gutowsky and Charles P. Slichter of the University of Illinois at Urbana (USA).
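The field-independence of the chemical shift can be sketched numerically. In practice the shift is quoted in parts per million relative to a reference compound; the frequency offsets below are made-up illustrative values, not measurements.

```python
def chemical_shift_ppm(nu_sample_hz: float, nu_ref_hz: float,
                       spectrometer_hz: float) -> float:
    # Dividing by the spectrometer (carrier) frequency removes the
    # dependence on the applied static field strength.
    return (nu_sample_hz - nu_ref_hz) / spectrometer_hz * 1e6

# The same resonance on two spectrometers: a 2000 Hz offset at 400 MHz
# and a 3000 Hz offset at 600 MHz both give the same 5.0 ppm shift.
print(chemical_shift_ppm(2000.0, 0.0, 400e6))
print(chemical_shift_ppm(3000.0, 0.0, 600e6))
```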

NMR Imaging Principles
A number of methods have been devised for combining magnetic field gradients and radiofrequency pulsed excitation to obtain an image. The two major methods involve either 2D-FT or 3D-FT reconstruction from projections, somewhat similar to computed tomography, except that the image interpretation in the former case must also include dynamic and relaxation/contrast enhancement information. Other schemes involve building the NMR image either point-by-point or line-by-line. Some schemes instead use gradients in the RF field rather than in the static magnetic field. The majority of NMR images are routinely obtained either by the two-dimensional Fourier transform (2D-FT) technique (with slice selection) or by the three-dimensional Fourier transform (3D-FT) techniques, which are however much more time-consuming at present. 2D-FT NMRI is sometimes called, in common parlance, "spin-warp" imaging. An NMR image corresponds to a spectrum consisting of a number of 'spatial frequencies' at different locations in the sample investigated, or in a patient. A two-dimensional Fourier transformation of such a "real" image may be considered as a representation of such "real waves" by a matrix of spatial frequencies known as k-space. We shall see next, in some mathematical detail, how the 2D-FT computation works to obtain 2D-FT NMR images.
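A minimal numerical sketch of the reconstruction step described above (using NumPy, and a toy rectangular "phantom" rather than real acquisition data): the acquired k-space matrix of spatial frequencies is turned back into an image by an inverse 2D Fourier transform.

```python
import numpy as np

# Toy "phantom" image: a bright rectangle on a dark background.
image = np.zeros((64, 64))
image[20:40, 24:44] = 1.0

# Acquisition fills k-space, the matrix of spatial frequencies;
# here we simulate it with a forward 2D FFT.
k_space = np.fft.fft2(image)

# 2D-FT reconstruction: the inverse 2D FFT recovers the image.
reconstructed = np.fft.ifft2(k_space).real
print(np.allclose(reconstructed, image))  # True
```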

Two-dimensional Fourier transform imaging and spectroscopy
A two-dimensional Fourier transform (2D-FT) is computed numerically, or carried out, in two stages, both involving 'standard' one-dimensional Fourier transforms. However, the second-stage Fourier transform is not the inverse Fourier transform (which would return the original function transformed at the first stage), but a Fourier transform in a second variable, which is 'shifted' in value relative to that involved in the result of the first Fourier transform. Such 2D-FT analysis is a very powerful method for both NMRI and two-dimensional nuclear magnetic resonance spectroscopy (2D-FT NMRS) that allows the three-dimensional reconstruction of polymer and biopolymer structures at atomic resolution, for molecular weights (Mw) of dissolved biopolymers in aqueous solutions (for example) up to about 50,000 Mw. For larger biopolymers or polymers, more complex methods have been developed to obtain the limited structural resolution needed for partial 3D reconstructions of higher molecular structures, e.g. for up to 900,000 Mw, or even oriented microcrystals in aqueous suspensions or single crystals; such methods have also been reported for in vivo 2D-FT NMR spectroscopic studies of algae, bacteria, yeast, and certain mammalian cells, including human ones. The 2D-FT method is also widely utilized in optical spectroscopy, such as 2D-FT NIR hyperspectral imaging (2D-FT NIR-HS), and in MRI for research and clinical, diagnostic applications in medicine. In the latter case, 2D-FT NIR-HS has recently allowed the identification of single malignant cancer cells surrounded by healthy human breast tissue at about 1 micron resolution, well beyond the resolution obtainable by 2D-FT NMRI for such systems in the limited time available for such diagnostic investigations (and also in magnetic fields up to the FDA-approved magnetic field strength H0 of 4.7 T, as shown in the top image of the state-of-the-art NMRI instrument).
A more precise mathematical definition of the `double' (2D) Fourier transform involved in both 2D NMRI and 2D-FT NMRS is specified next, and a precise example follows this generally accepted definition.

2D-FT Definition
A 2D-FT, or two-dimensional Fourier transform, is a standard Fourier transformation of a function of two variables, $$f(x_1, x_2)$$, carried out first in the first variable $$x_1$$, followed by the Fourier transform in the second variable $$x_2$$ of the resulting function $$F(s_1, x_2)$$. Note that in the case of both 2D-FT NMRI and 2D-FT NMRS the two independent variables in this definition are in the time domain, whereas the results of the two successive Fourier transforms have, of course, frequencies as the independent variable in NMRS, and ultimately spatial coordinates for both 2D NMRI and 2D-FT NMRS, following computer structural reconstructions based on special algorithms that are different from FT or 2D-FT. Moreover, such structural algorithms are different for 2D NMRI and 2D-FT NMRS: in the former case they involve macroscopic, or anatomical, structure determination, whereas in the latter case of 2D-FT NMRS the atomic structure reconstruction algorithms are based on the quantum theory of a microphysical (quantum) process such as the nuclear Overhauser enhancement (NOE), or specific magnetic dipole-dipole interactions between neighboring nuclei.
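The two-stage definition above can be checked numerically; this sketch uses NumPy, with a random array standing in for $$f(x_1, x_2)$$. Transforming first along the first variable and then along the second gives the same result as a direct 2D transform.

```python
import numpy as np

rng = np.random.default_rng(0)
f = rng.standard_normal((8, 8))          # stands in for f(x1, x2)

stage1 = np.fft.fft(f, axis=0)           # FT in the first variable -> F(s1, x2)
stage2 = np.fft.fft(stage1, axis=1)      # FT in the second variable -> F(s1, s2)

print(np.allclose(stage2, np.fft.fft2(f)))  # True
```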

Example 1
A 2D Fourier transformation and phase correction is applied to a set of 2D NMR (FID) signals $$s(t_1, t_2)$$, yielding a real 2D-FT NMR 'spectrum' (a collection of 1D FT-NMR spectra) represented by a matrix S whose elements are
 * $$S(\nu_1,\nu_2) = \textbf{Re} \int\int \cos(\nu_1 t_1)\, e^{-i\nu_2 t_2}\, s(t_1, t_2)\, dt_1\, dt_2,$$

where $$\nu_1$$ and $$\nu_2$$ denote the discrete indirect double-quantum and single-quantum (detection) axes, respectively, in the 2D NMR experiments. Next, the covariance matrix is calculated in the frequency domain according to the following equation
 * $$ C(\nu_2', \nu_2) = S^T S = \sum_{\nu_1}[S(\nu_1,\nu_2')S(\nu_1,\nu_2)],$$ with $$\nu_2, \nu_2'$$ taking all possible single-quantum frequency values and with the summation carried out over all discrete double-quantum frequencies $$\nu_1$$.
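The covariance step above is a plain matrix product, C = S^T S, with the sum running over the double-quantum index ν1. A small NumPy sketch, with random numbers standing in for real spectra:

```python
import numpy as np

rng = np.random.default_rng(1)
# Rows index the double-quantum axis nu1, columns the single-quantum axis nu2.
S = rng.standard_normal((32, 16))

# C(nu2', nu2) = sum over nu1 of S(nu1, nu2') * S(nu1, nu2)
C = S.T @ S

print(C.shape)               # (16, 16): single-quantum frequencies only
print(np.allclose(C, C.T))   # symmetric by construction: True
```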

Example 2
Atomic structure from 2D-FT STEM images of electron distributions in a high-temperature cuprate superconductor 'paracrystal' reveals both the domains (or 'locations') and the local symmetry of the 'pseudo-gap' in the electron-pair correlation band responsible for the high-temperature superconductivity effect (obtained at Cornell University). So far, three Nobel prizes have been awarded for 2D-FT NMR/MRI during 1992-2003, and an additional, earlier Nobel prize for the 2D-FT of X-ray data ('CAT scans'); recently, the advanced possibilities of 2D-FT techniques in chemistry, physiology, and medicine have received very significant recognition.

Brief explanation of NMRI diagnostic uses in Pathology
As an example, a diseased tissue such as a malignant tumor can be detected by 2D-FT NMRI because the hydrogen nuclei of molecules in different tissues return to their equilibrium spin state at different relaxation rates, and also because of the manner in which a malignant tumor spreads and grows rapidly along the blood vessels adjacent to the tumor, also inducing further vascularization. By changing the pulse delays in the RF pulse sequence employed, and/or the RF pulse sequence itself, one may obtain a 'relaxation-based contrast', or contrast enhancement, between different types of body tissue, such as normal versus diseased tissue cells. Excluded from such diagnostic observations by NMRI are all patients with ferromagnetic metal implants (e.g., cochlear implants) and all cardiac pacemaker patients, who cannot undergo any NMRI scan because the very intense magnetic and RF fields employed in NMRI would strongly interfere with the correct functioning of such pacemakers. It is, however, conceivable that future developments may also include, alongside NMRI diagnostics, treatments with special techniques involving applied magnetic fields and very high frequency RF. Already, surgery with special tools is being experimented with in the presence of NMR imaging of subjects. Thus, NMRI is used to image almost every part of the body, and is especially useful for diagnosis in neurological conditions, disorders of the muscles and joints, for evaluating tumors, such as in lung or skin cancers, and for abnormalities in the heart (especially in children with hereditary disorders), blood vessels, CAD, atherosclerosis, and cardiac infarcts (courtesy of Dr. Robert R. Edelman).
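The relaxation-based contrast described above can be sketched with the standard idealized spin-echo signal equation, S = ρ(1 - e^(-TR/T1))·e^(-TE/T2). The tissue parameters below are illustrative round numbers for the sake of the sketch, not clinical values.

```python
import math

def spin_echo_signal(rho: float, t1_ms: float, t2_ms: float,
                     tr_ms: float, te_ms: float) -> float:
    # Idealized spin-echo amplitude: proton density rho weighted by
    # T1 recovery over the repetition time TR and T2 decay over the
    # echo time TE (standard textbook form).
    return rho * (1.0 - math.exp(-tr_ms / t1_ms)) * math.exp(-te_ms / t2_ms)

# T2-weighted pulse-timing choice (long TR, long TE): tissue with a
# longer T2 keeps more signal and appears brighter in the image.
normal = spin_echo_signal(1.0, t1_ms=900.0, t2_ms=50.0, tr_ms=2500.0, te_ms=80.0)
lesion = spin_echo_signal(1.0, t1_ms=1100.0, t2_ms=120.0, tr_ms=2500.0, te_ms=80.0)
print(lesion > normal)  # True
```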

Related Wikipedia websites

 * Medical imaging
 * Computed tomography
 * Magnetic resonance microscopy
 * Fourier transform spectroscopy
 * FT-NIRS
 * Magnetic resonance elastography


 * Nuclear magnetic resonance (NMR)
 * Chemical shift
 * Relaxation
 * Robinson oscillator
 * Earth's field NMR (EFNMR)
 * Rabi cycle

This article incorporates material by the original author from 2D-FT MR Imaging and related Nobel awards on PlanetPhysics, which is licensed under the GFDL.


Nuclear Overhauser effect
http://en.wikipedia.org/wiki/Nuclear_Overhauser_effect

Quantum automata
http://planetphysics.org/encyclopedia/QuantumComputers.html

Quantum computer
http://en.wikipedia.org/wiki/Quantum_computer

Quark is under peer review
I've nominated the Quark article for peer review. Since I see you have made many edits to that article, you might be interested in watching the review. Thank you. -- Army1987 (t — c) 09:28, 22 October 2008 (UTC)

Esquisse d'un Programme
Updated and improved

"Esquisse d'un Programme" is a famous proposal for long-term mathematical research made by the German-born French mathematician Alexander Grothendieck. He pursued the sequence of logically linked ideas in this important project proposal from 1984 until 1988, but his proposed research continues to be of major interest in several branches of advanced mathematics. Grothendieck's vision provides inspiration today for several developments in mathematics, such as the extension and generalization of Galois theory, which is currently being extended based on his original proposal.

Outline of paper's content
The Esquisse d'un Programme was a successful proposal submitted by Alexander Grothendieck in 1984 for a position at the Centre National de la Recherche Scientifique, which he held from 1984 until 1988. The proposal was not formally published until 1997, however, because the author "could not be found, much less his permission requested". The dessins d'enfants, or "children's drawings", and "anabelian algebraic geometry" (non-Abelian algebraic topology and noncommutative geometry) contained in this long-term program continue even today to inspire extensive mathematical studies.

Abstract of the paper
("Sommaire")


 * 1. The Proposal and enterprise ("Envoi").
 * 2. "Teichmüller's Lego-game and the Galois group of Q over Q" ("Un jeu de “Lego-Teichmüller” et le groupe de Galois de Q sur Q").
 * 3. Number fields associated with "dessin d'enfants". ("Corps de nombres associés à un dessin d’enfant").
 * 4. Regular polyhedra over finite fields ("Polyèdres réguliers sur les corps finis").
 * 5. Denouncing so-called 'general' topology, and heuristic reflections towards a 'tame' topology ("Haro sur la topologie dite 'générale', et réflexions heuristiques vers une topologie dite 'modérée'").
 * 6. Differentiability theories (à la Nash) and tame theories ("Théories différentiables" (à la Nash) et "théories modérées").
 * 7. Pursuing Stacks ("À la Poursuite des Champs").
 * 8. Two-dimensional geometry ("Digressions de géométrie bidimensionnelle").
 * 9. Review of a teaching activity ("Bilan d'une activité enseignante").
 * 10. Epilogue.
 * Notes

Suggested further reading for the interested mathematical reader is provided in the References section.

Extensions of Galois's theory for groups: Galois groupoids, categories and functors
Galois developed a powerful, fundamental algebraic theory in mathematics that provides very efficient computations for certain algebraic problems by utilizing the algebraic concept of groups; this is now known as the theory of Galois groups. Such computations were not possible before, and in many cases they are much more effective than 'direct' calculations without groups. To begin with, Alexander Grothendieck stated in his proposal: "Thus, the group of Galois is realized as the automorphism group of a concrete, pro-finite group which respects certain structures that are essential to this group." This fundamental Galois group theory has been considerably expanded, at first to groupoids, as proposed in Alexander Grothendieck's Esquisse d'un Programme (EdP), and the extension to groupoids has now been partially carried out; these ideas are being developed further, beyond groupoids to categories, by several groups of mathematicians. Here, we shall focus only on the well-established and fully validated extensions of Galois' theory. Thus, EdP also proposed and anticipated, along the lines of Grothendieck's earlier IHÉS seminars (SGA1 to SGA4) held in the 1960s, the development of even more powerful extensions of the original Galois theory for groups by utilizing categories, functors, and natural transformations, as well as a further expansion of the manifold of ideas presented in Grothendieck's Descent Theory. The notion of motive has also been pursued actively, and was developed into the motivic Galois group, Grothendieck topology, and the Grothendieck category. Such developments were recently extended in algebraic topology via representable functors and the fundamental groupoid functor.
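As a small illustration of the classical group-theoretic case that these extensions generalize (a standard textbook example, not taken from the Esquisse itself), consider the splitting field of (x^2-2)(x^2-3) over the rationals:

```latex
% Classical Galois correspondence for K = \mathbb{Q}(\sqrt{2},\sqrt{3})
% over \mathbb{Q}:
\[
  \mathrm{Gal}\bigl(\mathbb{Q}(\sqrt{2},\sqrt{3})/\mathbb{Q}\bigr)
  \;\cong\; \mathbb{Z}/2 \times \mathbb{Z}/2,
\]
% generated by \sigma : \sqrt{2} \mapsto -\sqrt{2} (fixing \sqrt{3}) and
% \tau : \sqrt{3} \mapsto -\sqrt{3} (fixing \sqrt{2}). The three subgroups
% of order 2, namely \langle\sigma\rangle, \langle\tau\rangle, and
% \langle\sigma\tau\rangle, correspond under the Galois correspondence to
% the three intermediate fields \mathbb{Q}(\sqrt{3}), \mathbb{Q}(\sqrt{2}),
% and \mathbb{Q}(\sqrt{6}), respectively.
```

The groupoid and categorical extensions discussed above replace this automorphism group with a fundamental groupoid or a fibered category, but the correspondence between subobjects and intermediate structures follows the same pattern.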

The Long March across the Theory of Galois
“This manuscript, consisting of some nearly 800 hand-written double pages, dating from 1981, was left behind with Grothendieck's other unpublished manuscripts when he disappeared in 1991. Typed in Tex, it comes out to about 400 pages. It goes together with a further 1,000 pages or so of additional notes and sections which have not yet been read or typed. Many of the major themes were summarised in the 1983 manuscript Esquisse d'un Programme.”

The table of contents for this important work by Alexander Grothendieck was originally compiled in French by the author; it is reproduced here after the English translation of the major parts of the Long March.

Table of Contents for the Long March across Galois Theory

 * Multi-Galois toposes (topoi)
 * Applications to topos coverings
 * Pro-multi-Galois variants
 * Complements
 * Introducing the arithmetic context; an 'anabelian' (non-Abelian) fundamental conjecture
 * Local analysis of ... for ...
 * Reformulation of the conjecture (the necessary 'purgatorium'...)
 * A taxonomic reflexion
 * Tangential structure at ... (sections of second-type extensions)
 * Adjusting the hypotheses
 * Conditions on the groupoid systems originating from geometric considerations (in the nonabelian case, the groupoid system can be expressed in terms of outer groups)
 * Returning to the arithmetic case: the Galois-type formulation
 * A cohomological digression
 * Returning to the topological case: critical orbits
 * Returning to the concept of cyclic group
 * Application to the finite subgroups of ... (the discrete case, para. 18)
 * Tour of Teichmüller (spaces)
 * Digression: the description of 2-isotopic categories of algebraic curves
 * 21. Teichmüller spaces
 * 23. Returning to the surfaces of (finite) groups of operators ('formulating the equations' of the problem)
 * "Special" Teichmüller groups
 * The case of "two groups of operators"
 * 26. Profinite Teichmüller groups; connection with the modular Teichmüller topos; conjecture
 * 29. Critique of the previous approach
 * 31. Digression: a finite group over a profinite cyclic group
 * 32. Returning to the arithmetic aspects: a remarkable reconstruction of all of the étale topos of a complete algebraic curve starting from an open nonabelian space...

Related works by Alexander Grothendieck

 * Alexander Grothendieck. 1971. Revêtements Étales et Groupe Fondamental (SGA1), chapter VI: Catégories fibrées et descente. Lecture Notes in Mathematics 224, Springer-Verlag: Berlin.
 * Alexander Grothendieck. 1957. "Sur quelques points d'algèbre homologique". Tôhoku Mathematical Journal 9: 119–221.
 * Alexander Grothendieck and Jean Dieudonné. 1960. Éléments de géométrie algébrique. Publications de l'Institut des Hautes Études Scientifiques (IHÉS), 4.
 * Alexander Grothendieck et al. 1971. Séminaire de Géométrie Algébrique du Bois-Marie, Vol. 1–7. Berlin: Springer-Verlag.


 * Alexander Grothendieck. 1962. Séminaire de Géométrie Algébrique du Bois-Marie, Vol. 2: Cohomologie Locale des Faisceaux Cohérents et Théorèmes de Lefschetz Locaux et Globaux, 287 pp. (with an additional contributed exposé by Mme. Michèle Raynaud). Typewritten manuscript available in French; a brief summary in English is also available. References cited therein:
 * Jean-Pierre Serre. 1964. Cohomologie Galoisienne. Springer-Verlag: Berlin.
 * J. L. Verdier. 1965. Algèbre homologique et catégories dérivées. North-Holland.




 * Alexander Grothendieck et al. Séminaire de Géométrie Algébrique 4 (SGA4), Tome 1, Exposé 1 (or the Appendix to Exposé 1, by 'N. Bourbaki', for more detail and a large number of results). SGA4 is freely available in French; an extensive abstract in English is also available.


 * Alexander Grothendieck. 1984. "Esquisse d'un Programme" (1984 manuscript), finally published in Geometric Galois Actions (L. Schneps and P. Lochak, eds.), London Mathematical Society Lecture Notes 242, Cambridge University Press, 1997, pp. 5–48; English translation, ibid., pp. 243–283. MR 99c:14034.


 * Alexander Grothendieck. "La longue marche à travers la théorie de Galois" ("The Long March across the Theory of Galois"), 1981 manuscript, University of Montpellier preprint series 1996, edited by J. Malgoire.

Relating to the Esquisse

 * Leila Schneps. 1994. The Grothendieck Theory of Dessins d'Enfants. (London Mathematical Society Lecture Note Series), Cambridge University Press, 376 pp.






 * David Harbater and Leila Schneps. 2000. Fundamental groups of moduli and the Grothendieck-Teichmüller group, Trans. Amer. Math. Soc. 352 (2000), 3117-3148. MSC: Primary 11R32, 14E20, 14H10; Secondary 20F29, 20F34, 32G15.

Florentina Ioana Mosora
Florentina Ioana Mosora (b. 7 January 1940 in Cluj, Romania; d. 1994 in Liège, Belgium) was a Romanian biophysicist who graduated from the School of Physics (Facultatea de Fizică) of the University of Bucharest. In her earlier years (before 1964) she was a well-known Romanian actress who played the leading female character in at least four movies that were widely distributed in Romania.

Film career
During the 1960s she was a well-known actress, and played romantic roles in four Romanian movies: Dragoste la zero grade ("Love at zero degrees," 1964), Sub cupola albastra ("Under the blue arch," 1962), Post-restant ("PO Box," 1961), and Baietii nostri ("Our boys," 1959).

Academic career
From 1965 to 1970 Florentina Ioana Mosora worked as a neurobiophysics assistant in Vasile Vasilescu's Laboratory of Medical Biophysics at the School of Medicine and Pharmacy in Bucharest (IMFB). In 1972, she became a lecturer at the School of Biology. Later, in Belgium, she "pioneered the use of stable isotopes in medicine". In 1989 she was one of three scientists co-organizing a NATO workshop on "Biomechanical Transport Processes", and she was subsequently elected to the Royal Academies for Science and the Arts of Belgium (Flemish Academy).

Selected works

 * Marcel Lacroix, Florentina Mosora, Micheline Pontus, Pierre Lefebvre, Alfred Luyckx, and Gabriel Lopez-Habib. "Glucose Naturally Labeled with Carbon-13: Use for Metabolic Studies in Man". Science, 3 August 1973, Vol. 181, no. 4098, pp. 445–446. DOI:10.1126/science.181.4098.445.
 * Florentina Mosora, "Experimental Studies of Variations of the State," in

Metamathematics
http://en.wikipedia.org/wiki/Metamathematics

Metamathematics is 'mathematics used to study mathematics', or the application of a philosophy of mathematics. The first part of this general description appears tautological, or is perhaps open to the types of antinomies discussed by Bertrand Russell and Alfred Whitehead (e.g., "the set of all sets is not a set"), as described in their famous Principia Mathematica. An alternative, non-circular definition is as follows:

Metamathematics is the study of metatheories of standard theories in mathematics, that is, of mathematical (not 'purely logical') theories. Thus, the Encyclopædia Britannica defines a metatheory as a "theory, MT, the subject matter of which is another theory, T. A finding proved in the former (MT) that deals with the latter (T) is known as a metatheorem" (cited from "Metatheory", Encyclopædia Britannica Online). A major part of metamathematics therefore deals with metatheorems, that is "theorems about theorems", meta-propositions about propositions, metatheories about mathematical proofs (which of course utilize logic, but are also based upon fundamental mathematical concepts), and so on.

Metamathematical metatheorems about mathematics itself were originally differentiated from ordinary mathematical theorems in the 19th century, in response to what was then called the foundational crisis of mathematics. Richard's paradox, concerning certain 'definitions' of real numbers in the English language, is an example of the sort of contradiction that can easily occur if one fails to distinguish between mathematics and metamathematics. The paradoxes of Russell and Whitehead are another important example of contradictions arising from such failures in the 'old' set theory.

For example, one of the themes of metamathematics is the analysis of (and hence also discussions about) mathematical elements which are (necessarily) true (or false) in any mathematical system.

Many issues regarding the foundations of mathematics and the philosophy of mathematics touch on or use ideas from metamathematics. The working assumption of metamathematics is that mathematical content can be captured in a formal system, usually a first order theory or axiomatic set theory.

Metamathematics was intimately connected to mathematical logic, so that the histories of the two fields largely overlap.


Consider also that many sources define metamathematics as the branch of mathematics (not of logic) that deals with "the logic and consistency of mathematical proofs, formulas, and equations". Thus a major part of metamathematics deals with metatheorems, that is "theorems about theorems", meta-propositions about propositions, and so on. Metalogic, on the other hand, is defined in Wikipedia as "the study of the metatheory of logic" (i.e., not of mathematics). To sum up: whereas metamathematics is the study of metatheories in mathematics, metalogic is the study of the metatheories of logic. Therefore, unless one is willing to say that mathematics and logic are identical or even equivalent (which they are not), metamathematics and metalogic are not identical but substantially different. Moreover, since there is also mathematical logic, one must concede that the two fields of mathematics and logic partially overlap, but this overlap does not amount to either logical identity or mathematical equivalence. Many mathematical logicians would like to define mathematical logic as comprising the entire field of metamathematics; this is not currently the case, though it may have been fifty or a hundred years ago.

Serious metamathematical reflection began with the work of Gottlob Frege, especially his Begriffsschrift. David Hilbert was the first to invoke the term "metamathematics" with regularity (see Hilbert's program). In his hands, it meant something akin to contemporary proof theory. Another important contemporary branch is model theory. Other leading figures in the field include Bertrand Russell, Thoralf Skolem, Emil Post, Alonzo Church, Stephen Kleene, Willard Quine, Paul Benacerraf, Hilary Putnam, Gregory Chaitin, and, most importantly, Alfred Tarski and Kurt Gödel. In particular, Gödel's proof that, given any finite number of axioms for Peano arithmetic, there will be true statements about that arithmetic that cannot be proved from those axioms, a result known as the incompleteness theorem, is arguably the greatest achievement of metamathematics and the philosophy of mathematics to date.

The List of articles initiated and created by Bci2 on Wikipedia

 * American-Romanian Academy of Arts and Sciences
 * European CNRS Franco-Romanian Associated Laboratories
 * Higher dimensional algebra
 * Theodor V. Ionescu
 * R-algebroid
 * Double groupoid
 * Society for Mathematical Biology
 * Complex system biology
 * George Karreman
 * Nicolas Rashevsky
 * Molecular models of DNA
 * Florentina Mosora
 * Esquisse d'un Programme
 * VCD
 * Fraţii Buzeşti High School
 * 2D-FT NMRI and Spectroscopy

Other Major contributions to Wikipedia articles

 * George Emil Palade
 * Robert Rosen
 * Nicolae Popescu
 * List of important publications in biology
 * Systems biology
 * Mathematical and theoretical biology
 * Quark
 * Tannaka–Krein duality
 * Nuclear magnetic resonance
 * Category theory
 * Molecular Structure of Nucleic Acids: A Structure for Deoxyribose Nucleic Acid
 * FT-NIRS (FT-NIR) Spectroscopy

Metalogic
http://en.wikipedia.org/wiki/Metalogic

Metalogic is the study of the metatheory of logic. While logic is the study of the manner in which logical systems can be used to decide the correctness of arguments, metalogic studies the properties of the logical systems themselves. According to Geoffrey Hunter, while logic concerns itself with the "truths of logic," metalogic concerns itself with the theory of "sentences used to express truths of logic."

The basic objects of study in metalogic are formal languages, formal systems, and their interpretations. The study of interpretation of formal systems is the branch of mathematical logic known as model theory, while the study of deductive apparatus is the branch known as proof theory.

History
Metalogical questions have been asked since the time of Aristotle. However, it was only with the rise of formal languages in the late 19th and early 20th century that investigations into the foundations of logic began to flourish. In 1904, David Hilbert observed that in investigating the foundations of mathematics, logical notions are presupposed, and that a simultaneous account of metalogical and metamathematical principles was therefore required. Today, metalogic and metamathematics are largely synonymous with each other, and both have been substantially subsumed by mathematical logic in academia.

Metalanguage-Object language
In metalogic, formal languages are sometimes called object languages. The language used to make statements about an object language is called a metalanguage. This distinction is a key difference between logic and metalogic. While logic deals with proofs in a formal system, expressed in some formal language, metalogic deals with proofs about a formal system which are expressed in a metalanguage about some object language.

Syntax-semantics
In metalogic, 'syntax' has to do with formal languages or formal systems without regard to any interpretation of them, whereas, 'semantics' has to do with interpretations of formal languages. The term 'syntactic' has a slightly wider scope than 'proof-theoretic', since it may be applied to properties of formal languages without any deductive systems, as well as to formal systems. 'Semantic' is synonymous with 'model-theoretic'.

Use-mention
In metalogic, the words 'use' and 'mention', in both their noun and verb forms, take on a technical sense in order to identify an important distinction. The use–mention distinction (sometimes referred to as the words-as-words distinction) is the distinction between using a word (or phrase) and mentioning it. Usually it is indicated that an expression is being mentioned rather than used by enclosing it in quotation marks, printing it in italics, or setting the expression by itself on a line. The enclosing in quotes of an expression gives us the name of an expression, for example:


 * 'Metalogic' is the name of this article.
 * This article is about metalogic.

Type-token
The type-token distinction is a distinction in metalogic that separates an abstract concept from the objects which are particular instances of the concept. For example, the particular bicycle in your garage is a token of the type of thing known as "the bicycle". Whereas the bicycle in your garage is in a particular place at a particular time, that is not true of "the bicycle" as used in the sentence "The bicycle has become more popular recently." This distinction is used to clarify the meaning of symbols of formal languages.

Formal language
A formal language is an organized set of symbols the essential feature of which is that it can be precisely defined in terms of just the shapes and locations of those symbols. Such a language can be defined, then, without any reference to any meanings of any of its expressions; it can exist before any formal interpretation is assigned to it -- that is, before it has any meaning. First order logic is expressed in some formal language. A formal grammar determines which symbols and sets of symbols are formulas in a formal language.

A formal language can be defined formally as a set A of strings (finite sequences) on a fixed alphabet α. Some authors, including Carnap, define the language as the ordered pair <α, A>. Carnap also requires that each element of α must occur in at least one string in A.
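Carnap's definition above can be sketched directly: a language is a pair of an alphabet α and a set A of strings over α, with the extra requirement that every symbol of α occurs in at least one string of A. The following Python sketch is purely illustrative; the function name is ours, not from any source.

```python
# A minimal sketch of Carnap's definition of a formal language as an
# ordered pair <alpha, A>. The name `is_language` is illustrative only.

def is_language(alpha, A):
    """Check that A is a set of strings over alphabet alpha, and that
    every symbol of alpha occurs in at least one string of A
    (Carnap's additional requirement)."""
    over_alpha = all(ch in alpha for s in A for ch in s)
    every_symbol_used = all(any(ch in s for s in A) for ch in alpha)
    return over_alpha and every_symbol_used

alpha = {"a", "b"}
A = {"ab", "ba", "aab"}
print(is_language(alpha, A))            # True
print(is_language(alpha, {"ac"}))       # False: 'c' is not in the alphabet
print(is_language({"a", "b", "c"}, A))  # False: 'c' never occurs in A
```

Note how the third call fails only because of Carnap's extra requirement: the strings are all over the larger alphabet, but the symbol 'c' is never used.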

Formal grammar
A formal grammar (also called formation rules) is a precise description of the well-formed formulas of a formal language. It is synonymous with the set of strings over the alphabet of the formal language that constitute well-formed formulas. However, it does not describe their semantics (i.e. what they mean).

Formal systems
A formal system (also called a logical calculus, or a logical system) consists of a formal language together with a deductive apparatus (also called a deductive system). The deductive apparatus may consist of a set of transformation rules (also called inference rules) or a set of axioms, or have both. A formal system is used to derive one expression from one or more other expressions.

A formal system can be formally defined as an ordered triple $$\langle \alpha, \mathcal{I}, \mathcal{D}_d \rangle$$, where $$\mathcal{D}_d$$ is the relation of direct derivability. This relation is understood in a comprehensive sense such that the primitive sentences of the formal system are taken as directly derivable from the empty set of sentences. Direct derivability is a relation between a sentence and a finite, possibly empty set of sentences. Axioms are laid down in such a way that every first-place member of $$\mathcal{D}_d$$ is a member of $$\mathcal{I}$$ and every second-place member is a finite subset of $$\mathcal{I}$$.

It is also possible to define a formal system using only the relation $$\mathcal{D}_d$$. In this way we can omit $$\mathcal{I}$$ and α in the definitions of interpreted formal language and interpreted formal system. However, this method can be more difficult to understand and work with.

Formal proofs
A formal proof is a sequence of well-formed formulas of a formal language, the last of which is a theorem of a formal system. The theorem is a syntactic consequence of all the well-formed formulas preceding it in the proof. For a well-formed formula to qualify as part of a proof, it must result from applying a rule of the deductive apparatus of some formal system to the previous well-formed formulas in the proof sequence.
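The idea of a proof as a checkable sequence can be made concrete with a toy example. The sketch below is not any standard system: it represents formulas as strings, implications as tuples ("->", antecedent, consequent), and uses modus ponens as the only rule of the deductive apparatus.

```python
# A toy formal-proof checker: each line of a proof must be an axiom or
# must follow from earlier lines by modus ponens. This representation
# (strings and ("->", P, Q) tuples) is an illustrative assumption.

def modus_ponens(earlier, formula):
    """Does `formula` follow from earlier lines P and ("->", P, formula)?"""
    return any(("->", p, formula) in earlier for p in earlier)

def is_proof(axioms, lines):
    """Check that every line is an axiom or follows by modus ponens
    from the lines preceding it."""
    for i, formula in enumerate(lines):
        earlier = set(lines[:i])
        if formula not in axioms and not modus_ponens(earlier, formula):
            return False
    return True

axioms = {"P", ("->", "P", "Q")}
print(is_proof(axioms, ["P", ("->", "P", "Q"), "Q"]))  # True: Q by modus ponens
print(is_proof(axioms, ["Q"]))                          # False: Q is not an axiom
```

The last line of the first sequence is a theorem precisely because it is a syntactic consequence of the preceding well-formed formulas, as described above.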

Formal interpretations
A formal interpretation of a formal system is the assignment of meanings, to the symbols, and truth-values to the sentences of the formal system. The study of formal interpretations is called Formal semantics. Giving an interpretation is synonymous with constructing a model.

Results in metalogic
Results in metalogic consist of such things as formal proofs demonstrating the consistency, completeness, and decidability of particular formal systems.

Major results in metalogic include:


 * Proof of the uncountability of the set of all subsets of the set of natural numbers (Cantor's theorem 1891)
 * Löwenheim-Skolem theorem (Leopold Löwenheim 1915 and Thoralf Skolem 1919)
 * Proof of the consistency of truth-functional propositional logic (Emil Post 1920)
 * Proof of the semantic completeness of truth-functional propositional logic (Paul Bernays 1918, Emil Post 1920)
 * Proof of the syntactic completeness of truth-functional propositional logic (Emil Post 1920)
 * Proof of the decidability of truth-functional propositional logic (Emil Post 1920)
 * Proof of the consistency of first order monadic predicate logic (Leopold Löwenheim 1915)
 * Proof of the semantic completeness of first order monadic predicate logic (Leopold Löwenheim 1915)
 * Proof of the decidability of first order monadic predicate logic (Leopold Löwenheim 1915)
 * Proof of the semantic completeness of first order predicate logic (Gödel's completeness theorem 1930)
 * Proof of the consistency of first order predicate logic (David Hilbert and Wilhelm Ackermann 1928)
 * Proof of the undecidability of first order predicate logic (Alonzo Church 1936)
 * Gödel's first incompleteness theorem 1931
 * Gödel's second incompleteness theorem 1931

Properties


DNA is a long polymer made from repeating units called nucleotides. The DNA chain is 22 to 26 Ångströms wide (2.2 to 2.6 nanometres), and one nucleotide unit is 3.3 Å (0.33 nm) long. Although each individual repeating unit is very small, DNA polymers can be very large molecules containing millions of nucleotides. For instance, the largest human chromosome, chromosome number 1, is approximately 220 million base pairs long.
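The figures above allow a back-of-envelope estimate of how long such a molecule is when stretched out: at 0.33 nm of helical rise per nucleotide unit, the roughly 220 million base pairs of human chromosome 1 correspond to several centimetres of DNA.

```python
# Back-of-envelope length of human chromosome 1, using the figures
# quoted above: 0.33 nm per nucleotide unit along the helix axis,
# ~220 million base pairs in chromosome 1.

rise_per_bp_nm = 0.33          # helical rise per base pair, in nanometres
chromosome1_bp = 220_000_000   # approximate length in base pairs

length_nm = chromosome1_bp * rise_per_bp_nm
length_cm = length_nm * 1e-7   # 1 cm = 1e7 nm
print(f"{length_cm:.1f} cm")   # about 7.3 cm of DNA in a single chromosome
```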

In living organisms, DNA does not usually exist as a single molecule, but instead as a tightly associated pair of molecules. These two long strands entwine like vines, in the shape of a double helix. The nucleotide repeats contain both the segment of the backbone of the molecule, which holds the chain together, and a base, which interacts with the other DNA strand in the helix. In general, a base linked to a sugar is called a nucleoside and a base linked to a sugar and one or more phosphate groups is called a nucleotide. If multiple nucleotides are linked together, as in DNA, this polymer is called a polynucleotide.

The backbone of the DNA strand is made from alternating phosphate and sugar residues. The sugar in DNA is 2-deoxyribose, which is a pentose (five-carbon) sugar. The sugars are joined together by phosphate groups that form phosphodiester bonds between the third and fifth carbon atoms of adjacent sugar rings. These asymmetric bonds mean a strand of DNA has a direction. In a double helix the direction of the nucleotides in one strand is opposite to their direction in the other strand. This arrangement of DNA strands is called antiparallel. The asymmetric ends of DNA strands are referred to as the 5′ (five prime) and 3′ (three prime) ends, with the 5' end being that with a terminal phosphate group and the 3' end that with a terminal hydroxyl group. One of the major differences between DNA and RNA is the sugar, with 2-deoxyribose being replaced by the alternative pentose sugar ribose in RNA.
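Because the strands are antiparallel, the partner strand read in its own 5′→3′ direction is the reverse complement of a given sequence: complement each base, then reverse the result. A minimal sketch (base symbols and pairing as in the text; the function name is ours):

```python
# Reverse complement: the 5'->3' sequence of the antiparallel partner
# strand is obtained by complementing each base and reversing.

COMPLEMENT = {"A": "T", "T": "A", "G": "C", "C": "G"}

def reverse_complement(seq):
    """Return the 5'->3' sequence of the antiparallel partner strand."""
    return "".join(COMPLEMENT[base] for base in reversed(seq))

print(reverse_complement("ATGC"))  # GCAT
```

Applying the function twice returns the original sequence, reflecting the fact that each strand fully determines the other.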

The DNA double helix is stabilized by hydrogen bonds between the bases attached to the two strands. The four bases found in DNA are adenine (abbreviated A), cytosine (C), guanine (G) and thymine (T). These four bases are attached to the sugar/phosphate to form the complete nucleotide, as shown for adenosine monophosphate.

These bases are classified into two types; adenine and guanine are fused five- and six-membered heterocyclic compounds called purines, while cytosine and thymine are six-membered rings called pyrimidines. A fifth pyrimidine base, called uracil (U), usually takes the place of thymine in RNA and differs from thymine by lacking a methyl group on its ring. Uracil is not usually found in DNA, occurring only as a breakdown product of cytosine.

Grooves


Normally, the double helix is a right-handed spiral. As the DNA strands wind around each other, they leave gaps between each set of phosphate backbones, revealing the sides of the bases inside (see animation). There are two of these grooves twisting around the surface of the double helix: one groove, the major groove, is 22 Å wide and the other, the minor groove, is 12 Å wide. The narrowness of the minor groove means that the edges of the bases are more accessible in the major groove. As a result, proteins like transcription factors that can bind to specific sequences in double-stranded DNA usually make contacts to the sides of the bases exposed in the major groove. This situation varies in unusual conformations of DNA within the cell (see below), but the major and minor grooves are always named to reflect the differences in size that would be seen if the DNA is twisted back into the ordinary B form.

Base pairing
Each type of base on one strand forms a bond with just one type of base on the other strand. This is called complementary base pairing. Here, purines form hydrogen bonds to pyrimidines, with A bonding only to T, and C bonding only to G. This arrangement of two nucleotides binding together across the double helix is called a base pair. As hydrogen bonds are not covalent, they can be broken and rejoined relatively easily. The two strands of DNA in a double helix can therefore be pulled apart like a zipper, either by a mechanical force or high temperature. As a result of this complementarity, all the information in the double-stranded sequence of a DNA helix is duplicated on each strand, which is vital in DNA replication. Indeed, this reversible and specific interaction between complementary base pairs is critical for all the functions of DNA in living organisms.

[Figure caption: Top, a GC base pair with three hydrogen bonds. Bottom, an AT base pair with two hydrogen bonds. Non-covalent hydrogen bonds between the pairs are shown as dashed lines.]

The two types of base pairs form different numbers of hydrogen bonds: AT forms two hydrogen bonds, and GC forms three (see figure). DNA with high GC-content is more stable than DNA with low GC-content, but contrary to popular belief this is not due to the extra hydrogen bond of a GC base pair, but rather to the contribution of stacking interactions (hydrogen bonding merely provides specificity of the pairing, not stability). As a result, both the percentage of GC base pairs and the overall length of a DNA double helix determine the strength of the association between the two strands of DNA. Long DNA helices with a high GC content have stronger-interacting strands, while short helices with high AT content have weaker-interacting strands.
In biology, parts of the DNA double helix that need to separate easily, such as the TATAAT Pribnow box in some promoters, tend to have a high AT content, making the strands easier to pull apart. In the laboratory, the strength of this interaction can be measured by finding the temperature required to break the hydrogen bonds, their melting temperature (also called Tm value). When all the base pairs in a DNA double helix melt, the strands separate and exist in solution as two entirely independent molecules. These single-stranded DNA molecules have no single common shape, but some conformations are more stable than others.
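The relationship between composition and duplex stability can be sketched numerically. The "Wallace rule" used below (roughly 2 °C per A/T pair plus 4 °C per G/C pair) is a standard laboratory rule of thumb for estimating the melting temperature of short oligonucleotides only; it is an approximation introduced here for illustration, not a formula from the text above.

```python
# GC content and a rough melting-temperature estimate for short duplexes.
# The Wallace rule (Tm ~ 2 degC per A/T + 4 degC per G/C) is a rule of
# thumb valid only for short oligonucleotides; it is an assumption here.

def gc_content(seq):
    """Fraction of G and C bases in the sequence."""
    return sum(base in "GC" for base in seq) / len(seq)

def wallace_tm(seq):
    """Rule-of-thumb melting temperature (degC) for a short duplex."""
    at = sum(base in "AT" for base in seq)
    gc = sum(base in "GC" for base in seq)
    return 2 * at + 4 * gc

pribnow = "TATAAT"          # the AT-rich Pribnow box mentioned above
print(gc_content(pribnow))  # 0.0 -- no G or C at all
print(wallace_tm(pribnow))  # 12: low Tm, strands pull apart easily
print(wallace_tm("GCGCGC")) # 24: same length, much more stable
```

The comparison shows why an AT-rich element like the Pribnow box melts so readily: at equal length, the GC-rich duplex has a far higher estimated melting temperature.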

Sense and antisense
A DNA sequence is called "sense" if its sequence is the same as that of a messenger RNA copy that is translated into protein. The sequence on the opposite strand is called the "antisense" sequence. Both sense and antisense sequences can exist on different parts of the same strand of DNA (i.e. both strands contain both sense and antisense sequences). In both prokaryotes and eukaryotes, antisense RNA sequences are produced, but the functions of these RNAs are not entirely clear. One proposal is that antisense RNAs are involved in regulating gene expression through RNA-RNA base pairing.

A few DNA sequences in prokaryotes and eukaryotes, and more in plasmids and viruses, blur the distinction between sense and antisense strands by having overlapping genes. In these cases, some DNA sequences do double duty, encoding one protein when read along one strand, and a second protein when read in the opposite direction along the other strand. In bacteria, this overlap may be involved in the regulation of gene transcription, while in viruses, overlapping genes increase the amount of information that can be encoded within the small viral genome.

Supercoiling
DNA can be twisted like a rope in a process called DNA supercoiling. With DNA in its "relaxed" state, a strand usually circles the axis of the double helix once every 10.4 base pairs, but if the DNA is twisted the strands become more tightly or more loosely wound. If the DNA is twisted in the direction of the helix, this is positive supercoiling, and the bases are held more tightly together. If they are twisted in the opposite direction, this is negative supercoiling, and the bases come apart more easily. In nature, most DNA has slight negative supercoiling that is introduced by enzymes called topoisomerases. These enzymes are also needed to relieve the twisting stresses introduced into DNA strands during processes such as transcription and DNA replication.

Alternate DNA structures
DNA exists in many possible conformations that include A-DNA, B-DNA, and Z-DNA forms, although, only B-DNA and Z-DNA have been directly observed in organisms. The conformation that DNA adopts depends on the hydration level, DNA sequence, the amount and direction of supercoiling, chemical modifications of the bases, the type and concentration of metal ions, as well as the presence of polyamines in solution.

The first published reports of A-DNA X-ray diffraction patterns (and also of B-DNA) employed analyses based on Patterson transforms, which provided only a limited amount of structural information for oriented fibers of DNA isolated from calf thymus. An alternative analysis was proposed by Wilkins et al. in 1953 for the B-DNA X-ray diffraction/scattering patterns of hydrated, oriented fibers of bacterial DNA and of trout sperm heads, in terms of squares of Bessel functions. Although the 'B-DNA form' is most common under the conditions found in cells, it is not a well-defined conformation but a family, or fuzzy set, of DNA conformations that occur at the high hydration levels present in a wide variety of living cells. Their corresponding X-ray diffraction and scattering patterns are characteristic of molecular paracrystals with a significant degree of disorder (>20%), and consequently the structure is not tractable using only the standard analysis.

On the other hand, the standard analysis, involving only Fourier transforms of Bessel functions and DNA molecular models, is still routinely employed for the analysis of A-DNA and Z-DNA X-ray diffraction patterns.

Compared to B-DNA, the A-DNA form is a wider right-handed spiral, with a shallow, wide minor groove and a narrower, deeper major groove. The A form occurs under non-physiological conditions in partially dehydrated samples of DNA, while in the cell it may be produced in hybrid pairings of DNA and RNA strands, as well as in enzyme-DNA complexes. Segments of DNA where the bases have been chemically modified by methylation may undergo a larger change in conformation and adopt the Z form. Here, the strands turn about the helical axis in a left-handed spiral, the opposite of the more common B form. These unusual structures can be recognized by specific Z-DNA binding proteins and may be involved in the regulation of transcription.



Quadruplex structures
At the ends of the linear chromosomes are specialized regions of DNA called telomeres. The main function of these regions is to allow the cell to replicate chromosome ends using the enzyme telomerase, as the enzymes that normally replicate DNA cannot copy the extreme 3′ ends of chromosomes. These specialized chromosome caps also help protect the DNA ends, and stop the DNA repair systems in the cell from treating them as damage to be corrected. In human cells, telomeres are usually lengths of single-stranded DNA containing several thousand repeats of a simple TTAGGG sequence.

These guanine-rich sequences may stabilize chromosome ends by forming structures of stacked sets of four-base units, rather than the usual base pairs found in other DNA molecules. Here, four guanine bases form a flat plate and these flat four-base units then stack on top of each other, to form a stable G-quadruplex structure. These structures are stabilized by hydrogen bonding between the edges of the bases and chelation of a metal ion in the centre of each four-base unit. Other structures can also be formed, with the central set of four bases coming from either a single strand folded around the bases, or several different parallel strands, each contributing one base to the central structure.

In addition to these stacked structures, telomeres also form large loop structures called telomere loops, or T-loops. Here, the single-stranded DNA curls around in a long circle stabilized by telomere-binding proteins. At the very end of the T-loop, the single-stranded telomere DNA is held onto a region of double-stranded DNA by the telomere strand disrupting the double-helical DNA and base pairing to one of the two strands. This triple-stranded structure is called a displacement loop or D-loop.

Branched DNA
In DNA, fraying occurs when non-complementary regions exist at the end of an otherwise complementary double strand of DNA. However, branched DNA can occur if a third strand of DNA is introduced that contains adjoining regions able to hybridize with the frayed regions of the pre-existing double strand. Although the simplest example of branched DNA involves only three strands of DNA, complexes involving additional strands and multiple branches are also possible.

Chemical modifications
Structure of cytosine with and without the 5-methyl group. After deamination the 5-methylcytosine has the same structure as thymine

Base modifications
The expression of genes is influenced by how the DNA is packaged in chromosomes, in a structure called chromatin. Base modifications can be involved in packaging, with regions that have low or no gene expression usually containing high levels of methylation of cytosine bases. For example, cytosine methylation produces 5-methylcytosine, which is important for X-chromosome inactivation. The average level of methylation varies between organisms: the worm Caenorhabditis elegans lacks cytosine methylation, while vertebrates have higher levels, with up to 1% of their DNA containing 5-methylcytosine. Despite the importance of 5-methylcytosine, it can deaminate to leave a thymine base; methylated cytosines are therefore particularly prone to mutations. Other base modifications include adenine methylation in bacteria and the glycosylation of uracil to produce the "J-base" in kinetoplastids.

Damage


DNA can be damaged by many different sorts of mutagens, which change the DNA sequence. Mutagens include oxidizing agents, alkylating agents and also high-energy electromagnetic radiation such as ultraviolet light and X-rays. The type of DNA damage produced depends on the type of mutagen. For example, UV light can damage DNA by producing thymine dimers, which are cross-links between pyrimidine bases. On the other hand, oxidants such as free radicals or hydrogen peroxide produce multiple forms of damage, including base modifications, particularly of guanosine, and double-strand breaks. A typical human cell contains about 150,000 bases that have suffered oxidative damage. Of these oxidative lesions, the most dangerous are double-strand breaks, as these are difficult to repair and can produce point mutations, insertions and deletions from the DNA sequence, as well as chromosomal translocations.

Many mutagens fit into the space between two adjacent base pairs; this is called intercalation. Most intercalators are aromatic, planar molecules, and include ethidium bromide, daunomycin, and doxorubicin. For an intercalator to fit between base pairs, the bases must separate, distorting the DNA strands by unwinding the double helix. This inhibits both transcription and DNA replication, causing toxicity and mutations. As a result, DNA intercalators are often carcinogens; benzo[a]pyrene diol epoxide, acridines, aflatoxin and ethidium bromide are well-known examples. Nevertheless, due to their ability to inhibit DNA transcription and replication, other similar toxins are also used in chemotherapy to inhibit rapidly growing cancer cells.

Biological functions
DNA usually occurs as linear chromosomes in eukaryotes, and circular chromosomes in prokaryotes. The set of chromosomes in a cell makes up its genome; the human genome has approximately 3 billion base pairs of DNA arranged into 46 chromosomes. The information carried by DNA is held in the sequence of pieces of DNA called genes. Transmission of genetic information in genes is achieved via complementary base pairing. For example, in transcription, when a cell uses the information in a gene, the DNA sequence is copied into a complementary RNA sequence through the attraction between the DNA and the correct RNA nucleotides. Usually, this RNA copy is then used to make a matching protein sequence in a process called translation which depends on the same interaction between RNA nucleotides. Alternatively, a cell may simply copy its genetic information in a process called DNA replication. The details of these functions are covered in other articles; here we focus on the interactions between DNA and other molecules that mediate the function of the genome.

Genes and genomes
Genomic DNA is located in the cell nucleus of eukaryotes, as well as small amounts in mitochondria and chloroplasts. In prokaryotes, the DNA is held within an irregularly shaped body in the cytoplasm called the nucleoid. The genetic information in a genome is held within genes, and the complete set of this information in an organism is called its genotype. A gene is a unit of heredity and is a region of DNA that influences a particular characteristic in an organism. Genes contain an open reading frame that can be transcribed, as well as regulatory sequences such as promoters and enhancers, which control the transcription of the open reading frame.

In many species, only a small fraction of the total sequence of the genome encodes protein. For example, only about 1.5% of the human genome consists of protein-coding exons, with over 50% of human DNA consisting of non-coding repetitive sequences. The reasons for the presence of so much non-coding DNA in eukaryotic genomes and the extraordinary differences in genome size, or C-value, among species represent a long-standing puzzle known as the "C-value enigma." However, DNA sequences that do not code protein may still encode functional non-coding RNA molecules, which are involved in the regulation of gene expression. Some non-coding DNA sequences play structural roles in chromosomes. Telomeres and centromeres typically contain few genes, but are important for the function and stability of chromosomes. One abundant form of non-coding DNA in humans is the pseudogene, a copy of a gene that has been disabled by mutation. These sequences are usually just molecular fossils, although they can occasionally serve as raw genetic material for the creation of new genes through the process of gene duplication and divergence.

Transcription and translation
A gene is a sequence of DNA that contains genetic information and can influence the phenotype of an organism. Within a gene, the sequence of bases along a DNA strand defines a messenger RNA sequence, which then defines one or more protein sequences. The relationship between the nucleotide sequences of genes and the amino-acid sequences of proteins is determined by the rules of translation, known collectively as the genetic code. The genetic code consists of three-letter 'words' called codons formed from a sequence of three nucleotides (e.g. ACT, CAG, TTT).

In transcription, the codons of a gene are copied into messenger RNA by RNA polymerase. This RNA copy is then decoded by a ribosome that reads the RNA sequence by base-pairing the messenger RNA to transfer RNA, which carries amino acids. Since there are 4 bases read in three-letter combinations, there are 64 possible codons (4³ combinations). These encode the twenty standard amino acids, giving most amino acids more than one possible codon. There are also three 'stop' or 'nonsense' codons signifying the end of the coding region; these are the TAA, TGA and TAG codons.
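The codon arithmetic above can be checked with a short sketch (an illustrative Python example, not part of the original article):

```python
from itertools import product

BASES = "ACGT"
STOP_CODONS = {"TAA", "TGA", "TAG"}  # the three 'nonsense' codons named above

# Enumerate all three-letter combinations of the four bases: 4^3 codons.
codons = ["".join(p) for p in product(BASES, repeat=3)]

print(len(codons))                     # 64
print(len(codons) - len(STOP_CODONS))  # 61 codons encode the 20 amino acids
```

The surplus of 61 sense codons over 20 amino acids is what makes the genetic code degenerate, with most amino acids having several codons.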



Replication
Cell division is essential for an organism to grow, but when a cell divides it must replicate the DNA in its genome so that the two daughter cells have the same genetic information as their parent. The double-stranded structure of DNA provides a simple mechanism for DNA replication. Here, the two strands are separated and then each strand's complementary DNA sequence is recreated by an enzyme called DNA polymerase. This enzyme makes the complementary strand by finding the correct base through complementary base pairing, and bonding it onto the original strand. As DNA polymerases can only extend a DNA strand in a 5′ to 3′ direction, different mechanisms are used to copy the antiparallel strands of the double helix. In this way, the base on the old strand dictates which base appears on the new strand, and the cell ends up with a perfect copy of its DNA.
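The rule that each base on the old strand dictates the base on the new strand can be sketched as a small function (a hypothetical Python illustration; the function name is invented for the example):

```python
# Watson-Crick base pairing: A-T and G-C.
PAIR = {"A": "T", "T": "A", "G": "C", "C": "G"}

def complementary_strand(template: str) -> str:
    """Return the new strand (5' to 3') synthesized against a 5' to 3' template.

    The template is read 3' to 5' while the new strand grows 5' to 3',
    hence the reversal.
    """
    return "".join(PAIR[base] for base in reversed(template))

print(complementary_strand("ATGC"))  # GCAT
```

Applying the function twice recovers the original sequence, mirroring the fact that each strand fully determines its partner.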

Interactions with proteins
All the functions of DNA depend on interactions with proteins. These protein interactions can be non-specific, or the protein can bind specifically to a single DNA sequence. Enzymes can also bind to DNA and of these, the polymerases that copy the DNA base sequence in transcription and DNA replication are particularly important.

DNA-binding proteins
Interaction of DNA with histones (shown in white, top). These proteins' basic amino acids (below left, blue) bind to the acidic phosphate groups on DNA (below right, red).

Structural proteins that bind DNA are well-understood examples of non-specific DNA-protein interactions. Within chromosomes, DNA is held in complexes with structural proteins. These proteins organize the DNA into a compact structure called chromatin. In eukaryotes this structure involves DNA binding to a complex of small basic proteins called histones, while in prokaryotes multiple types of proteins are involved. The histones form a disk-shaped complex called a nucleosome, which contains two complete turns of double-stranded DNA wrapped around its surface. These non-specific interactions are formed through basic residues in the histones making ionic bonds to the acidic sugar-phosphate backbone of the DNA, and are therefore largely independent of the base sequence. Chemical modifications of these basic amino acid residues include methylation, phosphorylation and acetylation. These chemical changes alter the strength of the interaction between the DNA and the histones, making the DNA more or less accessible to transcription factors and changing the rate of transcription. Other non-specific DNA-binding proteins in chromatin include the high-mobility group proteins, which bind to bent or distorted DNA. These proteins are important in bending arrays of nucleosomes and arranging them into the larger structures that make up chromosomes.

A distinct group of DNA-binding proteins are those that specifically bind single-stranded DNA. In humans, replication protein A is the best-understood member of this family and is used in processes where the double helix is separated, including DNA replication, recombination and DNA repair. These binding proteins seem to stabilize single-stranded DNA and protect it from forming stem-loops or being degraded by nucleases.

In contrast, other proteins have evolved to bind to particular DNA sequences. The most intensively studied of these are the various transcription factors, which are proteins that regulate transcription. Each transcription factor binds to one particular set of DNA sequences and activates or inhibits the transcription of genes that have these sequences close to their promoters. The transcription factors do this in two ways. Firstly, they can bind the RNA polymerase responsible for transcription, either directly or through other mediator proteins; this locates the polymerase at the promoter and allows it to begin transcription. Alternatively, transcription factors can bind enzymes that modify the histones at the promoter; this will change the accessibility of the DNA template to the polymerase.

As these DNA targets can occur throughout an organism's genome, changes in the activity of one type of transcription factor can affect thousands of genes. Consequently, these proteins are often the targets of the signal transduction processes that control responses to environmental changes or cellular differentiation and development. The specificity of these transcription factors' interactions with DNA comes from the proteins making multiple contacts to the edges of the DNA bases, allowing them to "read" the DNA sequence. Most of these base-interactions are made in the major groove, where the bases are most accessible.



Nucleases and ligases
Nucleases are enzymes that cut DNA strands by catalyzing the hydrolysis of the phosphodiester bonds. Nucleases that hydrolyse nucleotides from the ends of DNA strands are called exonucleases, while endonucleases cut within strands. The most frequently used nucleases in molecular biology are the restriction endonucleases, which cut DNA at specific sequences. For instance, the EcoRV enzyme recognizes the 6-base sequence 5′-GAT|ATC-3′ and makes a cut at the vertical line. In nature, these enzymes protect bacteria against phage infection by digesting the phage DNA when it enters the bacterial cell, acting as part of the restriction modification system. In technology, these sequence-specific nucleases are used in molecular cloning and DNA fingerprinting.
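The sequence-specific cutting described above can be illustrated by a sketch that splits a single strand at each EcoRV site (hypothetical Python code; it treats one strand as a string and ignores the enzyme's actual chemistry):

```python
SITE = "GATATC"   # EcoRV recognition sequence, written 5' to 3'
CUT_OFFSET = 3    # EcoRV cuts between GAT and ATC (the vertical line above)

def digest(dna: str) -> list[str]:
    """Split a sequence at every EcoRV recognition site (illustrative only)."""
    fragments, start = [], 0
    pos = dna.find(SITE)
    while pos != -1:
        fragments.append(dna[start:pos + CUT_OFFSET])  # keep GAT on the left fragment
        start = pos + CUT_OFFSET
        pos = dna.find(SITE, pos + 1)
    fragments.append(dna[start:])
    return fragments

print(digest("AAGATATCTTGATATCAA"))  # ['AAGAT', 'ATCTTGAT', 'ATCAA']
```

A sequence with no recognition site is returned uncut, just as a restriction digest leaves such DNA intact.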

Enzymes called DNA ligases can rejoin cut or broken DNA strands. Ligases are particularly important in lagging strand DNA replication, as they join together the short segments of DNA produced at the replication fork into a complete copy of the DNA template. They are also used in DNA repair and genetic recombination.

Topoisomerases and helicases
Topoisomerases are enzymes with both nuclease and ligase activity. These proteins change the amount of supercoiling in DNA. Some of these enzymes work by cutting the DNA helix and allowing one section to rotate, thereby reducing its level of supercoiling; the enzyme then seals the DNA break. Other types of these enzymes are capable of cutting one DNA helix and then passing a second strand of DNA through this break, before rejoining the helix. Topoisomerases are required for many processes involving DNA, such as DNA replication and transcription.

Helicases are proteins that are a type of molecular motor. They use the chemical energy in nucleoside triphosphates, predominantly ATP, to break hydrogen bonds between bases and unwind the DNA double helix into single strands. These enzymes are essential for most processes where enzymes need to access the DNA bases.

Polymerases
Polymerases are enzymes that synthesize polynucleotide chains from nucleoside triphosphates. The sequences of their products are copies of existing polynucleotide chains, which are called templates. These enzymes function by adding nucleotides onto the 3′ hydroxyl group of the previous nucleotide in a DNA strand. Consequently, all polymerases work in a 5′ to 3′ direction. In the active site of these enzymes, the incoming nucleoside triphosphate base-pairs to the template: this allows polymerases to accurately synthesize the complementary strand of their template. Polymerases are classified according to the type of template that they use.

In DNA replication, a DNA-dependent DNA polymerase makes a copy of a DNA sequence. Accuracy is vital in this process, so many of these polymerases have a proofreading activity. Here, the polymerase recognizes the occasional mistakes in the synthesis reaction by the lack of base pairing between the mismatched nucleotides. If a mismatch is detected, a 3′ to 5′ exonuclease activity is activated and the incorrect base removed. In most organisms DNA polymerases function in a large complex called the replisome that contains multiple accessory subunits, such as the DNA clamp or helicases.

RNA-dependent DNA polymerases are a specialized class of polymerases that copy the sequence of an RNA strand into DNA. They include reverse transcriptase, which is a viral enzyme involved in the infection of cells by retroviruses, and telomerase, which is required for the replication of telomeres. Telomerase is an unusual polymerase because it contains its own RNA template as part of its structure.

Transcription is carried out by a DNA-dependent RNA polymerase that copies the sequence of a DNA strand into RNA. To begin transcribing a gene, the RNA polymerase binds to a sequence of DNA called a promoter and separates the DNA strands. It then copies the gene sequence into a messenger RNA transcript until it reaches a region of DNA called the terminator, where it halts and detaches from the DNA. As with human DNA-dependent DNA polymerases, RNA polymerase II, the enzyme that transcribes most of the genes in the human genome, operates as part of a large protein complex with multiple regulatory and accessory subunits.

Genetic recombination
Structure of the Holliday junction intermediate in genetic recombination. The four separate DNA strands are coloured red, blue, green and yellow.



A DNA helix usually does not interact with other segments of DNA, and in human cells the different chromosomes even occupy separate areas in the nucleus called "chromosome territories". This physical separation of different chromosomes is important for the ability of DNA to function as a stable repository for information, as one of the few times chromosomes interact is during chromosomal crossover when they recombine. In chromosomal crossover, two DNA helices break, swap a section and then rejoin.

Recombination allows chromosomes to exchange genetic information and produces new combinations of genes, which increases the efficiency of natural selection and can be important in the rapid evolution of new proteins. Genetic recombination can also be involved in DNA repair, particularly in the cell's response to double-strand breaks.

The most common form of chromosomal crossover is homologous recombination, where the two chromosomes involved share very similar sequences. Non-homologous recombination can be damaging to cells, as it can produce chromosomal translocations and genetic abnormalities. The recombination reaction is catalyzed by enzymes known as recombinases, such as RAD51. The first step in recombination is a double-stranded break either caused by an endonuclease or damage to the DNA. A series of steps catalyzed in part by the recombinase then leads to joining of the two helices by at least one Holliday junction, in which a segment of a single strand in each helix is annealed to the complementary strand in the other helix. The Holliday junction is a tetrahedral junction structure that can be moved along the pair of chromosomes, swapping one strand for another. The recombination reaction is then halted by cleavage of the junction and re-ligation of the released DNA.

Evolution
DNA contains the genetic information that allows all modern living things to function, grow and reproduce. However, it is unclear how long in the 4-billion-year history of life DNA has performed this function, as it has been proposed that the earliest forms of life may have used RNA as their genetic material. RNA may have acted as the central part of early cell metabolism as it can both transmit genetic information and carry out catalysis as part of ribozymes. This ancient RNA world where nucleic acid would have been used for both catalysis and genetics may have influenced the evolution of the current genetic code based on four nucleotide bases. This would occur since the number of unique bases in such an organism is a trade-off between a small number of bases increasing replication accuracy and a large number of bases increasing the catalytic efficiency of ribozymes.

Unfortunately, there is no direct evidence of ancient genetic systems, as recovery of DNA from most fossils is impossible. This is because DNA will survive in the environment for less than one million years and slowly degrades into short fragments in solution. Claims for older DNA have been made, most notably a report of the isolation of a viable bacterium from a 250-million-year-old salt crystal, but these claims are controversial.

Genetic engineering
Methods have been developed to purify DNA from organisms, such as phenol-chloroform extraction, and to manipulate it in the laboratory, such as restriction digests and the polymerase chain reaction. Modern biology and biochemistry make intensive use of these techniques in recombinant DNA technology. Recombinant DNA is a man-made DNA sequence that has been assembled from other DNA sequences. Such sequences can be transformed into organisms in the form of plasmids or, in the appropriate format, by using a viral vector. The genetically modified organisms produced can be used to produce products such as recombinant proteins, used in medical research, or be grown in agriculture.

Forensics
Forensic scientists can use DNA in blood, semen, skin, saliva or hair found at a crime scene to identify the DNA of a matching individual, such as a perpetrator. This process is called genetic fingerprinting, or more accurately, DNA profiling. In DNA profiling, the lengths of variable sections of repetitive DNA, such as short tandem repeats and minisatellites, are compared between people. This method is usually an extremely reliable technique for identifying a match. However, identification can be complicated if the scene is contaminated with DNA from several people. DNA profiling was developed in 1984 by British geneticist Sir Alec Jeffreys, and first used in forensic science to convict Colin Pitchfork in the 1988 Enderby murders case.
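The idea of comparing repeat lengths can be sketched roughly as follows (an illustrative Python example with invented toy sequences; real profiling measures amplified fragment lengths by capillary electrophoresis, not raw strings):

```python
import re

def longest_repeat_run(dna: str, motif: str) -> int:
    """Longest run of tandem repeats of `motif` in `dna` (illustrative only)."""
    runs = re.findall(f"(?:{motif})+", dna)
    return max((len(r) // len(motif) for r in runs), default=0)

# Two hypothetical samples differing in their AGAT repeat count at one locus:
print(longest_repeat_run("CCAGATAGATAGATGG", "AGAT"))      # 3 repeats
print(longest_repeat_run("CCAGATAGATAGATAGATGG", "AGAT"))  # 4 repeats
```

Because repeat counts at such loci vary widely between individuals, comparing the counts at several loci can distinguish people with high confidence.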

People convicted of certain types of crimes may be required to provide a sample of DNA for a database. This has helped investigators solve old cases where only a DNA sample was obtained from the scene. DNA profiling can also be used to identify victims of mass casualty incidents. On the other hand, many convicted people have been released from prison on the basis of DNA techniques, which were not available when a crime had originally been committed.

Bioinformatics
Bioinformatics involves the manipulation, searching, and data mining of DNA sequence data. The development of techniques to store and search DNA sequences has led to widely applied advances in computer science, especially string searching algorithms, machine learning and database theory. String searching or matching algorithms, which find an occurrence of a sequence of letters inside a larger sequence of letters, were developed to search for specific sequences of nucleotides. In other applications such as text editors, even simple algorithms for this problem usually suffice, but DNA sequences cause these algorithms to exhibit near-worst-case behaviour due to their small number of distinct characters.

The related problem of sequence alignment aims to identify homologous sequences and locate the specific mutations that make them distinct. These techniques, especially multiple sequence alignment, are used in studying phylogenetic relationships and protein function.

Data sets representing entire genomes' worth of DNA sequences, such as those produced by the Human Genome Project, are difficult to use without annotations, which label the locations of genes and regulatory elements on each chromosome. Regions of DNA sequence that have the characteristic patterns associated with protein- or RNA-coding genes can be identified by gene finding algorithms, which allow researchers to predict the presence of particular gene products in an organism even before they have been isolated experimentally.
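A naive string matcher of the kind discussed above can be sketched as follows (illustrative Python, not a production algorithm; practical tools use methods such as Boyer-Moore or suffix arrays):

```python
def naive_search(text: str, pattern: str) -> list[int]:
    """Return every start position of `pattern` in `text` (naive matcher)."""
    hits = []
    for i in range(len(text) - len(pattern) + 1):
        if text[i:i + len(pattern)] == pattern:
            hits.append(i)
    return hits

# With only four letters, accidental partial matches are frequent, which is
# why DNA text pushes simple matchers toward their worst-case behaviour.
genome = "ATGCGATATCGATATCG"
print(naive_search(genome, "GATATC"))  # [4, 10]
```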

DNA nanotechnology


DNA nanotechnology uses the unique molecular recognition properties of DNA and other nucleic acids to create self-assembling branched DNA complexes with useful properties. DNA is thus used as a structural material rather than as a carrier of biological information. This has led to the creation of two-dimensional periodic lattices (both tile-based as well as using the "DNA origami" method) as well as three-dimensional structures in the shapes of polyhedra. Nanomechanical devices and algorithmic self-assembly have also been demonstrated, and these DNA structures have been used to template the arrangement of other molecules such as gold nanoparticles and streptavidin proteins.

History and anthropology
Because DNA collects mutations over time, which are then inherited, it contains historical information and by comparing DNA sequences, geneticists can infer the evolutionary history of organisms, their phylogeny. This field of phylogenetics is a powerful tool in evolutionary biology. If DNA sequences within a species are compared, population geneticists can learn the history of particular populations. This can be used in studies ranging from ecological genetics to anthropology; for example, DNA evidence is being used to try to identify the Ten Lost Tribes of Israel.
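Counting the positions at which two aligned sequences differ is a minimal ingredient of such comparisons; it can be sketched as follows (illustrative Python with invented toy sequences, not real genomic data):

```python
def hamming(seq_a: str, seq_b: str) -> int:
    """Number of differing positions between two aligned, equal-length sequences."""
    assert len(seq_a) == len(seq_b), "sequences must already be aligned"
    return sum(a != b for a, b in zip(seq_a, seq_b))

# Hypothetical aligned fragments: fewer differences suggests closer relatedness.
seq_x = "ATGGCTA"
seq_y = "ATGGCTT"
seq_z = "ATGACGT"
print(hamming(seq_x, seq_y))  # 1
print(hamming(seq_x, seq_z))  # 3
```

Phylogenetic methods build on such pairwise distances, together with models of how substitutions accumulate, to reconstruct branching order.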

DNA has also been used to look at modern family relationships, such as establishing family relationships between the descendants of Sally Hemings and Thomas Jefferson. This usage is closely related to the use of DNA in criminal investigations detailed above. Indeed, some criminal investigations have been solved when DNA from crime scenes has matched relatives of the guilty individual.

History of DNA research


DNA was first isolated by the Swiss physician Friedrich Miescher who, in 1869, discovered a microscopic substance in the pus of discarded surgical bandages. As it resided in the nuclei of cells, he called it "nuclein". In 1919, Phoebus Levene identified the base, sugar and phosphate nucleotide unit. Levene suggested that DNA consisted of a string of nucleotide units linked together through the phosphate groups. However, Levene thought the chain was short and the bases repeated in a fixed order. In 1937 William Astbury produced the first X-ray diffraction patterns that showed that DNA had a regular structure.

In 1928, Frederick Griffith discovered that traits of the "smooth" form of the Pneumococcus could be transferred to the "rough" form of the same bacteria by mixing killed "smooth" bacteria with the live "rough" form. This system provided the first clear suggestion that DNA carried genetic information (the Avery-MacLeod-McCarty experiment), when Oswald Avery, along with coworkers Colin MacLeod and Maclyn McCarty, identified DNA as the transforming principle in 1943. DNA's role in heredity was confirmed in 1952, when Alfred Hershey and Martha Chase in the Hershey-Chase experiment showed that DNA is the genetic material of the T2 phage.

In 1953, based on X-ray diffraction images taken by Rosalind Franklin and the information that the bases were paired, James D. Watson and Francis Crick suggested what is now accepted as the first accurate model of DNA structure in the journal Nature. Experimental evidence for Watson and Crick's model was published in a series of five articles in the same issue of Nature. Of these, Franklin and Raymond Gosling's paper was the first publication of X-ray diffraction data that supported the Watson and Crick model; this issue also contained an article on DNA structure by Maurice Wilkins and two of his colleagues. In 1962, after Franklin's death, Watson, Crick, and Wilkins jointly received the Nobel Prize in Physiology or Medicine. Nobel rules of the time allowed only living recipients, and a vigorous debate continues on who should receive credit for the discovery.

In an influential presentation in 1957, Crick laid out the "Central Dogma" of molecular biology, which foretold the relationship between DNA, RNA, and proteins, and articulated the "adaptor hypothesis". Final confirmation of the replication mechanism that was implied by the double-helical structure followed in 1958 through the Meselson-Stahl experiment. Further work by Crick and coworkers showed that the genetic code was based on non-overlapping triplets of bases, called codons, allowing Har Gobind Khorana, Robert W. Holley and Marshall Warren Nirenberg to decipher the genetic code. These findings represent the birth of molecular biology.

PlanetMath.org website contents
PlanetMath content is licensed under the ‘copyleft’ GNU Free Documentation License. An author who starts a new article becomes the owner of that article; he or she may then choose to grant editing rights to other individuals or groups. All content is written in LaTeX, a typesetting system popular among mathematicians because of its support of the technical needs of mathematical typesetting and its high-quality output. The user can explicitly create links to other articles, and the system also automatically turns certain words into links to the defining articles. The topic area of every article is classified by the Mathematics Subject Classification (MSC) of the American Mathematical Society (AMS).

Users may attach addenda, errata and discussions to articles. A system for private messaging among users is also in place.

As of April 2009, the encyclopedia hosted about 8,470 entries and over 14,000 concepts (a concept may be, for example, a specific notion defined within a more general entry). About 600 Wikipedia pages incorporate text from PlanetMath and/or PlanetPhysics.org articles.

The software running PlanetMath is written in Perl and runs on Linux and the web server Apache. It is known as Noösphere and has been released under the free BSD License.

A 2,300-page book of the PlanetMath Free Encyclopedia contents up to 2006 is available as a free 18 MB PDF download. Additional applied-mathematics content, over 2 GB of bibliographic materials, and Encyclopedia entries (currently over 40 MB) related to mathematics applications and theoretical physics can also be freely downloaded from the related, but independent, Noosphere-based [http://planetphysics.org/ PlanetPhysics.org] website.

DNA Molecular Models
Molecular models of DNA structures are representations of the molecular geometry and topology of deoxyribonucleic acid (DNA) molecules using one of several means, such as closely packed spheres (CPK models) made of plastic, metal wires for 'skeletal models', graphic computations and animations by computers, artistic rendering, and so on, with the aim of simplifying and presenting the essential physical and chemical properties of DNA molecular structures either in vivo or in vitro. Computer molecular models also allow animations and molecular dynamics simulations that are very important for understanding how DNA functions in vivo. Thus, a long-standing dynamic problem is how DNA "self-replication" takes place in living cells, which should involve transient uncoiling of supercoiled DNA fibers. Although DNA consists of relatively rigid, very large elongated biopolymer molecules called "fibers" or chains (made of repeating nucleotide units of four basic types, attached to deoxyribose and phosphate groups), its molecular structure in vivo undergoes dynamic configuration changes that involve dynamically attached water molecules and ions. Supercoiling, packing with histones in chromosome structures, and other such supramolecular aspects also involve in vivo DNA topology, which is even more complex than DNA molecular geometry, thus turning molecular modeling of DNA into an especially challenging problem for both molecular biologists and biotechnologists. Like other large molecules and biopolymers, DNA often exists in multiple stable geometries (that is, it exhibits conformational isomerism) and configurational quantum states which are close to each other in energy on the potential energy surface of the DNA molecule. Such geometries can also be computed, at least in principle, by employing ab initio quantum chemistry methods that have high accuracy for small molecules. Such quantum geometries define an important class of ab initio molecular models of DNA whose exploration has barely started.

In an interesting twist of roles, the DNA molecule itself was proposed to be utilized for quantum computing. Both DNA nanostructures as well as DNA 'computing' biochips have been built (see biochip image at right).

The more advanced, computer-based molecular models of DNA involve molecular dynamics simulations as well as quantum mechanical computations of vibro-rotations, delocalized molecular orbitals (MOs), electric dipole moments, hydrogen-bonding, and so on.

Importance
From the very early stages of structural studies of DNA by X-ray diffraction and biochemical means, molecular models such as the Watson-Crick double-helix model were successfully employed to solve the 'puzzle' of DNA structure, and also to find how the latter relates to its key functions in living cells. The first high-quality X-ray diffraction patterns of A-DNA were reported by Rosalind Franklin and Raymond Gosling in 1953. The first calculations of the Fourier transform of an atomic helix were reported one year earlier by Cochran, Crick and Vand, and were followed in 1953 by the computation of the Fourier transform of a coiled-coil by Crick. The first reports of a double-helix molecular model of B-DNA structure were made by Watson and Crick in 1953. Last but not least, Maurice F. Wilkins, A. Stokes and H.R. Wilson reported the first X-ray patterns of in vivo B-DNA in partially oriented salmon sperm heads. The development of the first correct double-helix molecular model of DNA by Crick and Watson may not have been possible without the biochemical evidence for nucleotide base-pairing ([A---T]; [C---G]), or Chargaff's rules.

Examples of DNA molecular models
Animated molecular models allow one to visually explore the three-dimensional (3D) structure of DNA. The first DNA model shown is a space-filling, or CPK, model of the DNA double helix, whereas the third is an animated wire-frame, or skeletal-type, molecular model of DNA. The last two DNA molecular models in this series depict quadruplex DNA that may be involved in certain cancers. The last figure on this panel is a molecular model of hydrogen bonds between water molecules in ice that are similar to those found in DNA.


 * Spacefilling models or CPK models - a molecule is represented by overlapping spheres representing the atoms:


Images for DNA Structure Determination from X-Ray Patterns
The following images illustrate both the principles and the main steps involved in generating structural information from X-ray diffraction studies of oriented DNA fibers with the help of molecular models of DNA that are combined with crystallographic and mathematical analysis of the X-ray patterns. From left to right the gallery of images shows:
 * First row:
 * 1. Constructive X-ray interference, or diffraction, following Bragg's Law of X-ray "reflection by the crystal planes";
 * 2. A comparison of A-DNA (crystalline) and highly hydrated B-DNA (paracrystalline) X-ray diffraction and X-ray scattering patterns, respectively (courtesy of Dr. Herbert R. Wilson, FRS; see refs. list);
 * 3. Purified DNA precipitated in a water jug;
 * 4. The major steps involved in DNA structure determination by X-ray crystallography, showing the important role played by molecular models of DNA structure in this iterative structure-determination process;
 * Second row:
 * 5. Photo of a modern X-ray diffractometer employed for recording X-ray patterns of DNA with major components: X-ray source, goniometer, sample holder, X-ray detector and/or plate holder;
 * 6. Illustrated animation of an X-ray goniometer;
 * 7. X-ray detector at the SLAC synchrotron facility;
 * 8. Neutron scattering facility at ISIS in UK;
 * Third and fourth rows: Molecular models of DNA structure at various scales; figure #11 is an actual electron micrograph of a DNA fiber bundle, presumably of a single bacterial chromosome loop.
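Bragg's law from step 1 above, n·λ = 2d·sin θ, can be solved for the plane spacing d. The sketch below uses illustrative values (Cu K-alpha wavelength and an assumed diffraction angle), not values taken from the original fiber-diffraction data:

```python
import math

def bragg_spacing(wavelength_nm: float, theta_deg: float, order: int = 1) -> float:
    """Solve Bragg's law, n*lambda = 2*d*sin(theta), for the plane spacing d."""
    return order * wavelength_nm / (2.0 * math.sin(math.radians(theta_deg)))

# Cu K-alpha radiation (~0.154 nm) diffracted at theta ~ 13 degrees gives a
# spacing close to the ~0.34 nm base-pair stacking distance of B-DNA.
d = bragg_spacing(0.154, 13.0)
print(f"d = {d:.3f} nm")  # d = 0.342 nm
```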

Paracrystalline lattice models of B-DNA structures
A paracrystalline lattice, or paracrystal, is a molecular or atomic lattice with significant amounts (e.g., larger than a few percent) of partial disordering of molecular arrangements. Limiting cases of the paracrystal model are nanostructures, such as glasses, liquids, etc., that may possess only local ordering and no global order. Liquid crystals also have paracrystalline rather than crystalline structures.

[Image: DNA helix controversy in 1952]

Highly hydrated B-DNA occurs naturally in living cells in such a paracrystalline state, which is a dynamic one in spite of the relatively rigid DNA double helix stabilized by parallel hydrogen bonds between the nucleotide base pairs in the two complementary, helical DNA chains (see figures). For simplicity, most DNA molecular models omit both water and ions dynamically bound to B-DNA, and are thus less useful for understanding the dynamic behaviors of B-DNA in vivo. The physical and mathematical analysis of X-ray and spectroscopic data for paracrystalline B-DNA is therefore much more complicated than that of crystalline A-DNA X-ray diffraction patterns. The paracrystal model is also important for DNA technological applications such as DNA nanotechnology. Novel techniques that combine X-ray diffraction of DNA with X-ray microscopy in hydrated living cells are now also being developed (see, for example, "Application of X-ray microscopy in the analysis of living hydrated cells").

Genomic and Biotechnology Applications of DNA molecular modeling
The following gallery of images illustrates various uses of DNA molecular modeling in Genomics and Biotechnology research applications from DNA repair to PCR and DNA nanostructures; each slide contains its own explanation and/or details. The first slide presents an overview of DNA applications, including DNA molecular models, with emphasis on Genomics and Biotechnology.

X-ray diffraction

 * NDB ID: UD0017 Database
 * X-ray Atlas database
 * PDB files of coordinates for nucleic acid structures from X-ray diffraction by NA (incl. DNA) crystals
 * Structure factors: downloadable files in CIF format

Neutron scattering

 * ISIS neutron source
 * ISIS pulsed neutron source: a world centre for science with neutrons & muons at Harwell, near Oxford, UK.

X-ray microscopy

 * Application of X-ray microscopy in the analysis of living hydrated cells

Electron microscopy

 * DNA under electron microscope

Atomic Force Microscopy (AFM)
Two-dimensional DNA junction arrays have been visualized by atomic force microscopy (AFM). Other imaging resources for AFM/scanning probe microscopy (SPM) can be freely accessed at:
 * How SPM Works
 * SPM Image Gallery - AFM STM SEM MFM NSOM and more.

Spectroscopy

 * Vibrational circular dichroism (VCD)
 * FT-NMR
 * NMR Atlas database
 * mmcif downloadable coordinate files of nucleic acids in solution from 2D-FT NMR data
 * NMR constraints files for NAs in PDB format
 * NMR microscopy
 * Microwave spectroscopy
 * FT-IR
 * FT-NIR
 * Spectral, hyperspectral, and chemical imaging.
 * Raman spectroscopy/microscopy and CARS.
 * Fluorescence correlation spectroscopy, fluorescence cross-correlation spectroscopy and FRET.
 * Confocal microscopy

Genomic and structural databases

 * CBS Genome Atlas Database &mdash; contains examples of base skews.
 * The Z curve database of genomes &mdash; a 3-dimensional visualization and analysis tool of genomes.
 * DNA and other nucleic acids' molecular models: Coordinate files of nucleic acids molecular structure models in PDB and CIF formats

Projects

 * WP-Math


 * WP:WikiProject Physics


 * WPBiography

Complex Systems Biology and Complexity Science
Complexity science treats complex systems as a field of scientific study in its own right. It examines the common properties of systems considered fundamentally complex, which may occur in nature, society, science and many other fields. It is also called complex systems theory, the study of complex systems, the sciences of complexity, non-equilibrium physics, and historical physics. The key problems with such systems are the difficulties of their formal modeling and simulation. From this perspective, complex systems are defined in different research contexts on the basis of their different attributes; at present, there is no consensus on a single universal definition of a complex system.

Overview
The study of complex systems is bringing a new approach to the many scientific questions that are a poor fit for the usual mechanistic view of reality present in science. Complex systems is therefore often used as a broad term encompassing a research approach to problems in many diverse disciplines including anthropology, artificial life, chemistry, computer science, economics, evolutionary computation, earthquake prediction, meteorology, molecular biology, neuroscience, physics, psychology and sociology.

In these endeavors, scientists often seek simple non-linear coupling rules which lead to complex phenomena (rather than describing them), but this need not be the case. Human societies (and probably human brains) are complex systems in which neither the components nor the couplings are simple. Nevertheless, they exhibit many of the hallmarks of complex systems. It is worth remarking that non-linearity is not a necessary feature of complex systems modeling: macro-analyses that concern unstable equilibrium and evolution processes of certain biological/social/economic systems can usefully be carried out also by sets of linear equations, which do nevertheless entail reciprocal dependence between variable parameters.

Traditionally, engineering has striven to keep its systems linear, because that makes them simpler to build and to predict. However, many physical systems (for example lasers) are inherently "complex systems" in terms of the definition above, and engineering practice must now include elements of complex systems research.

Information theory applies well to the complex adaptive systems, CAS, through the concepts of object oriented design, as well as through formalized concepts of organization and disorder that can be associated with any systems evolution process.

History
The study of complex systems is a new approach to science, examining how relationships between parts give rise to the collective behaviors of a system and how the system interacts and forms relationships with its environment.

The earliest precursor to modern complex systems theory can be found in the classical political economy of the Scottish Enlightenment, later developed by the Austrian school of economics, which says that order in market systems is spontaneous (or emergent) in that it is the result of human action, but not the execution of any human design.

Upon this the Austrian school developed from the 19th to the early 20th century the economic calculation problem, along with the concept of dispersed knowledge, which were to fuel debates against the then-dominant Keynesian economics. This debate would notably lead economists, politicians and other parties to explore the question of computational complexity.

A pioneer in the field, and inspired by Karl Popper's and Warren Weaver's works, Nobel prize economist and philosopher Friedrich Hayek dedicated much of his work, from early to the late 20th century, to the study of complex phenomena, not constraining his work to human economies but to other fields such as psychology, biology and cybernetics.

Steven Strogatz, in Sync, observed that "every decade or so, a grandiose theory comes along, bearing similar aspirations and often brandishing an ominous-sounding C-name. In the 1960s it was cybernetics. In the '70s it was catastrophe theory. Then came chaos theory in the '80s and complexity theory in the '90s."

Complexity and modeling
One of Hayek's main contributions to early complexity theory is his distinction between the human capacity to predict the behaviour of simple systems and the capacity to predict the behaviour of complex systems through modeling. He believed that economics and the sciences of complex phenomena in general, which in his view included biology, psychology, and so on, could not be modeled after the sciences that deal with essentially simple phenomena like physics. Hayek notably explained that complex phenomena, through modeling, can only allow pattern predictions, in contrast with the precise predictions that can be made of non-complex phenomena.

Complexity and chaos theory
Complexity theory is rooted in chaos theory, which in turn has its origins more than a century ago in the work of the French mathematician Henri Poincaré. Chaos is sometimes viewed as extremely complicated information, rather than as an absence of order. The point is that chaos remains deterministic: with perfect knowledge of the initial conditions and of the context of an action, its course can be predicted in chaos theory. As argued by Ilya Prigogine, complexity, by contrast, is non-deterministic, and gives no way whatsoever to predict the future. The emergence of complexity theory shows a domain between deterministic order and randomness which is complex. This is referred to as the 'edge of chaos'.
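The deterministic-yet-unpredictable character of chaos can be illustrated with the logistic map, a standard textbook example (the initial conditions below are arbitrary):

```python
def logistic(x: float, r: float = 4.0) -> float:
    """One step of the logistic map x -> r*x*(1 - x); chaotic at r = 4."""
    return r * x * (1.0 - x)

def trajectory(x0: float, steps: int) -> list:
    """Iterate the map: the orbit is fully determined by x0."""
    xs = [x0]
    for _ in range(steps):
        xs.append(logistic(xs[-1]))
    return xs

a = trajectory(0.200000, 30)
b = trajectory(0.200001, 30)  # perturb the initial condition by 1e-6

# Determinism: the same start always yields the same orbit...
assert trajectory(0.2, 30) == a
# ...yet nearby starts diverge rapidly (sensitivity to initial conditions).
print(abs(a[30] - b[30]))
```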

When one analyses complex systems, sensitivity to initial conditions, for example, is not as important an issue as it is within chaos theory, where it prevails. As stated by Colander, the study of complexity is the opposite of the study of chaos: complexity is about how a huge number of extremely complicated and dynamic relationships can generate some simple behavioural patterns, whereas chaotic behaviour, in the sense of deterministic chaos, is the result of a relatively small number of non-linear interactions.

Therefore, the main difference between chaotic systems and complex systems is their history. Chaotic systems do not rely on their history as complex ones do. Chaotic behaviour pushes a system in equilibrium into chaotic order, which means, in other words, out of what we traditionally define as 'order'. On the other hand, complex systems evolve far from equilibrium at the edge of chaos. They evolve at a critical state built up by a history of irreversible and unexpected events. In a sense, chaotic systems can be regarded as a subset of complex systems distinguished precisely by this absence of historical dependence. Many real complex systems are, in practice and over long but finite time periods, robust. However, they do possess the potential for radical qualitative change of kind whilst retaining systemic integrity. Metamorphosis serves as perhaps more than a metaphor for such transformations.

Research centers, conferences, and journals
Institutes and research centers


 * New England Complex Systems Institute
 * Santa Fe Institute
 * Center for Social Dynamics & Complexity (CSDC) at Arizona State University
 * Southampton Institute for Complex Systems Simulation
 * Center for the Study of Complex Systems at the University of Michigan
 * Center for Complex Systems and Brain Sciences at Florida Atlantic University

Journals
 * Complex Systems journal
 * Interdisciplinary Description of Complex Systems journal

Page categories
* Bci2
