User talk:Choigan

Copyright violation in LARIAT
Hello, and welcome to Wikipedia. We appreciate your contributions to the article LARIAT, but for legal reasons, we cannot accept copyrighted text or images borrowed from other web sites or printed material, and as a consequence, your addition was deleted under section G12 of the criteria for speedy deletion.

You may use external websites as a source of information, but not as a source of sentences. This part is crucial: say it in your own words.

If the external website belongs to you, and you want to allow Wikipedia to use the text—which means allowing other people to modify it—then you must include on the external site the statement "I, (name), am the author of this article, (article name), and I release its content under the terms of the GNU Free Documentation License, Version 1.2 and later." You may also e-mail or mail the Foundation to release the content. See Donating copyrighted materials for more.

While we appreciate contributions, we must require all contributors to understand and comply with our copyright policy. Wikipedia takes copyright concerns very seriously, and persistent violators will be blocked from editing.

You might want to look at Wikipedia's policies and guidelines for more details, or ask a question here. You can also leave a message on my talk page. - CobaltBlueTony™ talk 15:19, 24 June 2009 (UTC)

Proposal for Advanced entropy based machine translation using Adaptive Neural Networks by Ivan P Gan

This document is aimed at a technology audience and, as such, assumes the reader has a high level of knowledge of software design, web design, translation memory systems, RDBMS, networks, communication protocols, server systems, i18n, l10n, g11n, and character encoding methods and practices, along with some knowledge of adaptive neural networks. This document should be read in conjunction with the Language support systems development document by Ivan P. Gan.

NLSO/LSF stands for Neutral Language Support Objects / Language Support Framework. "Language neutral" describes a web site or application designed for generic language support: it has no inherent dependency on any particular language, and a support framework allows language switching at run time without recompilation or modification of the software itself.

The UTF-8 standard describes a method of encoding and storing language-dependent character information which is capable of extending well beyond the Latin alphabet; UTF-16, UTF-32 and UCS-2 are also character encoding standards. In this context the supported character sets are UTF-8 for web applications, and UTF-8 and UTF-16 for applications developed with the Lazarus compiler.

Neural networks come in many forms:
- Analog, using op-amps, resistors, etc.
- Soft digital, using techniques ranging from simple discriminators to fully adaptive complex networks implemented in software
- Firm digital, implemented using programmable logic arrays
- Hard digital, implemented using application-specific chips designed to create digital neural networks

Most implementations run as isolated systems of very limited capacity on a single machine, such as those in optical character recognition, and do not learn from each other or use centralized systems to store neuron maps. Some experimental systems have been run on networked computers, though at this time there appear to be no large-scale projects. This limits the scope of these systems dramatically, much as isolated computers are limited by lack of access to the data resources available via the net.
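The storage trade-offs among the encodings named above can be seen directly. A minimal Python sketch (the sample strings are arbitrary; UTF-16/UTF-32 sizes exclude the byte-order mark):

```python
# Compare how the same text is stored under the Unicode encodings
# mentioned above. Scripts beyond the Latin alphabet cost more bytes
# in UTF-8 but the same in the fixed-width encodings.

def byte_lengths(text: str) -> dict:
    """Return the encoded size of `text` under several Unicode encodings."""
    return {
        "utf-8": len(text.encode("utf-8")),
        "utf-16": len(text.encode("utf-16-le")),   # little-endian, no BOM
        "utf-32": len(text.encode("utf-32-le")),
    }

latin = byte_lengths("cat")     # ASCII: 1 byte per char in UTF-8
cyrillic = byte_lengths("кот")  # Cyrillic: 2 bytes per char in UTF-8
print(latin)
print(cyrillic)
```

Note that UTF-8 is only compact for Latin-script text; for Cyrillic the UTF-8 and UTF-16 sizes are equal, which is one reason an application framework may support both.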

Global Adaptive Neural Entropy Translation Systems (GANETS)
Many research groups worldwide are working on machine translation, but these legacy efforts ignore the message entropy and work on language pairs. Such systems are inherently inaccurate and require complex, unwieldy rule structures which are bound to language pairs rather than universal. My proposal to use entropy-based translation is a completely different approach to this rules-based system, moving instead to an understanding-based knowledge system. I propose using a centralized data structure built on top of NLSO via ComChatter.com servers, so that there is a common reference point for neural maps and resource data. The other major shift from traditional systems is designing the neural systems to run locally and over LAN/WAN links to form a single unified neural entity, thus taking advantage of vast computing power and human guidance for unprecedented learning and translating capability.

There are several stages to achieving the result, as follows.

Language identification
Language identification is undertaken using various techniques, such as character set recognition and spell checks.
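A toy sketch of the character-set-recognition technique mentioned above. The Unicode ranges and the winner-takes-all rule are simplifying assumptions; a real identifier would combine this signal with dictionary/spell-check evidence:

```python
# Guess the writing system of a text by counting characters per
# Unicode code-point range. A crude first pass at language
# identification; ambiguous scripts (e.g. Latin shared by many
# languages) still need spell-check evidence to resolve.

def dominant_script(text: str) -> str:
    """Return the script with the most characters, or 'unknown'."""
    ranges = {
        "latin": (0x0041, 0x024F),
        "greek": (0x0370, 0x03FF),
        "cyrillic": (0x0400, 0x04FF),
        "cjk": (0x4E00, 0x9FFF),
    }
    counts = {name: 0 for name in ranges}
    for ch in text:
        cp = ord(ch)
        for name, (lo, hi) in ranges.items():
            if lo <= cp <= hi:
                counts[name] += 1
    best = max(counts, key=counts.get)
    return best if counts[best] else "unknown"

print(dominant_script("Привет мир"))   # cyrillic
print(dominant_script("hello world"))  # latin
```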

Place name (address) recognition, transliteration and translation
Place names are easily recognized by humans but not by machines; to a machine, a word is just letters devoid of meaning. Our GEO database makes it possible to recognize place names, thereby conveying meaning to the translation engine. Many names are not translated; instead they are transliterated into the target writing system.
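A sketch of place-name recognition against a gazetteer, as described above. The tiny `GEO_DB` dictionary is a stand-in assumption for the proposal's GEO database; entries map a place name to its form in each target writing system:

```python
# Recognize place names by gazetteer lookup, then substitute the
# target-script form (transliteration, not translation), matching
# the behaviour described in the text.

GEO_DB = {
    "munich": {"de": "München", "ru": "Мюнхен"},
    "moscow": {"de": "Moskau", "ru": "Москва"},
}

def tag_place_names(tokens, target):
    """Replace recognized place names with their target-script form."""
    out = []
    for tok in tokens:
        entry = GEO_DB.get(tok.lower())
        if entry and target in entry:
            out.append(entry[target])  # transliterate, don't translate
        else:
            out.append(tok)            # unknown words pass through as-is
    return out

print(tag_place_names(["I", "visited", "Munich"], "ru"))
```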

Name, trademark, product, service and company name recognition and transliteration
As with place names, recognizing personal names, trademarks, products, services and company names carries meaning which is unavailable to legacy translation methods; normally these will be transliterated between writing systems rather than translated, if not carried through as-is. Personal names are matched with probability statistics to differentiate between family names, first names and gender. Though it is possible to list names with racial, religious and national probability, this is avoided to prevent contentious and divisive use of the data; in any event it does not constructively contribute to the translation task.

Currency, time/date, phone numbers, URLs, email addresses and postcodes
Relatively simple tests will recognize these in their respective classes with a high level of confidence, and they can be verified using a series of simple checks: a DNS lookup on a URL, a DNS MX record to verify an email address, postcode matching, etc. When looking at the recognition of place names, real names, trademarks and so on, it rapidly becomes apparent how essential the data systems are to rapid recognition of the elements within the text.
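The "relatively simple tests" above can be sketched as a pattern classifier. The regular expressions are illustrative assumptions (the postcode pattern is UK-style), and the network verification steps (DNS lookup, MX record) are noted in comments but stubbed out so the sketch stays self-contained:

```python
import re

# Classify a token into one of the element classes named above.
# A real system would follow a match with verification, e.g. a DNS
# lookup for a URL or an MX-record query for an email domain.

PATTERNS = {
    "email": re.compile(r"^[\w.+-]+@[\w-]+\.[\w.-]+$"),
    "url": re.compile(r"^https?://\S+$"),
    "uk_postcode": re.compile(r"^[A-Z]{1,2}\d[A-Z\d]? ?\d[A-Z]{2}$"),
    "phone": re.compile(r"^\+?[\d ()-]{7,}$"),
}

def classify(token: str):
    """Return the first matching element class, or None."""
    for name, pattern in PATTERNS.items():
        if pattern.match(token):
            return name  # verification (DNS, MX, postcode DB) would go here
    return None

print(classify("user@example.com"))        # email
print(classify("https://example.com/p"))   # url
print(classify("SW1A 1AA"))                # uk_postcode
```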

Describing words
This section is partially repeated from the Language support systems development document by Ivan P. Gan for convenience. The reason this section is so important is the meaning it gives to each word. Consider marmalade, apple, bicycle and rock: what do these words mean? What would they mean if I spoke to you for the first time, or if you had no knowledge of English? Exactly nothing — devoid of purpose and meaning — and this is the defining difference in my research: we give the words meaning by classifying and describing them. The word apple we associate with fruit, food, apple tree, living natural organism, round, approximately 4 to 8 cm in diameter; therefore we understand what it refers to. The same with rock, etc., and in this lies the entropy we need to translate effectively.

Car = voiture, or does it? Voiture actually means vehicle, and a car is a specific type of vehicle, so there is an inconsistency in the entropy conveyed during translation and a resultant data loss. Consider the following phrases: "I want a blue car", "I will have a blue car", "Give me the blue car", "The blue car is the one I want". The entropy for all four phrases is "I require blue car". More complex variations: "I am hungry", "I want food", "I want to eat", "Feed me", "Give me food", "Some food please", "I want dinner", "I would like a meal". The entropy for each variation is essentially "I am hungry". From this contrived example one can see the scale of the problem and why humans have so far beaten machines at translation; one also sees the importance of entropy as a means of translation. The more accurate the entropy to begin with, the more accurate the resultant translation, and this can only be learned over time through machine experience. Traditional approaches operate in isolation; the use of centralized databases via networks combines the learning experience of all systems.
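The many-phrases-to-one-meaning idea above can be sketched as a lookup table. The canonical "entropy" strings and the normalization rule are contrived assumptions for illustration; the proposal envisages these mappings being learned rather than hand-written:

```python
# Several surface phrases collapse to one canonical meaning entry,
# mirroring the blue-car and hunger examples in the text.

ENTROPY_TABLE = {
    "i want a blue car": "REQUEST(car, colour=blue)",
    "give me the blue car": "REQUEST(car, colour=blue)",
    "the blue car is the one i want": "REQUEST(car, colour=blue)",
    "i am hungry": "STATE(hunger)",
    "feed me": "STATE(hunger)",
    "i want food": "STATE(hunger)",
}

def extract_entropy(phrase: str):
    """Look up the canonical meaning for a surface phrase."""
    return ENTROPY_TABLE.get(phrase.lower().strip(".!? "))

# Different phrasings, identical information content:
print(extract_entropy("I want a blue car"))
print(extract_entropy("Give me the blue car"))
print(extract_entropy("Feed me!"))
```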

There are numerous entries for each word in our database, namely:
- Word
- Usage (% of overall usage)
- Rank
- Oxford-style descriptive entry
- Say-As guide for speech synthesis systems
- Gender (neutral, masculine, feminine)
- Voice recording of both male and female native speakers of the language
- Related words
- Singular/plural terms
- Grouping (greetings, business, social, etc.)
- Classification (some words can have more than one classification)
- Common confusables and context-variable words
- Context probability entries (these help identify cardiac as belonging to a medical context, or titanium as metallurgy:20, engineering:70)

The numeric weight value associated with each context helps identify the overall context of a document; weights are adjusted as part of the learning process on both a local and a global basis.
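One possible shape for such a per-word record, sketched as a Python dataclass. The field names and the titanium sample values are illustrative assumptions, not the proposal's actual schema:

```python
from dataclasses import dataclass, field

# A per-word database record carrying the fields enumerated above,
# plus the context-weight lookup used to classify a document.

@dataclass
class WordEntry:
    word: str
    usage_pct: float                    # % of overall usage
    rank: int
    definition: str                     # Oxford-style descriptive entry
    say_as: str                         # guide for speech synthesis
    gender: str = "neutral"             # neutral / masculine / feminine
    related: list = field(default_factory=list)
    classifications: list = field(default_factory=list)
    context_weights: dict = field(default_factory=dict)  # e.g. metallurgy:20

titanium = WordEntry(
    word="titanium", usage_pct=0.002, rank=18234,
    definition="a hard silver-grey metal", say_as="ty-TAY-nee-um",
    classifications=["entity:physical"],
    context_weights={"metallurgy": 20, "engineering": 70},
)

def dominant_context(entry: WordEntry) -> str:
    """The highest-weighted context, as used to classify a document."""
    return max(entry.context_weights, key=entry.context_weights.get)

print(dominant_context(titanium))  # engineering
```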

The classification of words can include:
- Entity
  - Physical: cat, rock, water, etc.
  - Conceptual: program, song, light, radiation, energy, ghost
- Descriptive: blue, hard, soft, wet, noisy, hot
- Time/chronology: first, last, now, later, future, present, past
- Action: run, fly, walk, speak, drive, dance, add, subtract, divide
- Quantities: number, more, less, higher, lots, few, hotter
- Logical and conditional: true, false, if, else, and, not, neither, nor, any, either
- Emotional: love, hate, fear, like, loathe
- Location (also a real name and a physical entity)
- Real names/titles (can also be a location, physical entity or conceptual entity); also defines male/female and first/family name probability fields
- Acronyms and abbreviations (can refer to any other classification)
- Singular/plural/multiple

Entropy tables and the NLSO database
The phrase bank we already have will in due course be scanned for entropy, the results entered in an entropy table, and an entropy reference entry placed in reference to the mnemonics table. This approach allows multiple phrases to reference a single entropy; it also allows better cross-matching of phrases supplied by human translators.
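A relational sketch of the table layout described above: each phrase row holds a foreign key into a table of unique entropy entries, so many phrases (including human-translated variants in other languages) share one meaning record. Table and column names are assumptions for illustration:

```python
import sqlite3

# In-memory schema: one entropy row, many phrase rows referencing it.
con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE entropy (id INTEGER PRIMARY KEY, meaning TEXT UNIQUE);
    CREATE TABLE phrase (text TEXT, lang TEXT,
                         entropy_id INTEGER REFERENCES entropy(id));
""")
con.execute("INSERT INTO entropy (id, meaning) VALUES (1, 'STATE(hunger)')")
con.executemany(
    "INSERT INTO phrase VALUES (?, ?, 1)",
    [("I am hungry", "en"), ("Feed me", "en"), ("J'ai faim", "fr")],
)

# Cross-matching: every phrase sharing entropy 1, regardless of language.
rows = con.execute(
    "SELECT text FROM phrase WHERE entropy_id = 1 ORDER BY text"
).fetchall()
matched = [r[0] for r in rows]
print(matched)
```

The `UNIQUE` constraint on `meaning` is what enforces the one-entropy-many-phrases property at the database level.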

The neural translator
The translator is a learning, self-aligning network, and during the initial phases we guide that learning to create the initial rules and weights. Once basic learning is achieved, we feed the system our entire English-language NLSO database to reinforce rules and weight values. Human correction and training will be used to correct entropy errors; no attempt at this stage will be made to do any translation. We then re-scan to improve entropy quality and flag differences against the first pass, with human interaction to guide the network; multiple passes may be made. After the initial results are verified, we task the computer with learning grammars and entropy extraction from other languages, using the translations already in our database; entropy conflicts are again highlighted for correction. After successful training of the initial network we begin live trials.

The first phase of any translation is identifying the input text; this verifies the language, character set, spelling and grammar of the source data. The second phase, in parsing the data, is to associate the words with their respective definition entries, identifying names, places, dates, etc., and marking them for transliteration or restructuring/reformatting where appropriate. During second-phase scanning, context probability data is calculated to assist the translator in creating the correct entropy and gender relationships. Phase 2 scans a substantial portion of a large document, or an entire small document, to gain a better context picture prior to engaging the third phase. Phase 3 is what I call SOP — subject, object, predicate — and these can be nested in complex sentences. During phase 4, neural networks attempt Boolean minimization and entropy extraction to obtain an accurate image of the message meaning. The target-language translator looks for entropy matches against the known database and, if one is found, uses the associated reference entry. If no entry exists for the target language, the output stage creates a phrase based on the input and context, respecting gender relationships if defined during input scanning; in the case of emails and instant messages, if the recipient's gender is known, it overrides the calculated gender. After translation, the output is fed back to the system using phases 2, 3 and 4 to extract the entropy and flag errors; it is this feedback which improves the learning and accuracy of the overall system. Phase 1 is redundant for this purpose, as the spelling and grammar of the output will be derived from the same rules as phase 1 analysis anyhow.
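The four phases and the feedback pass above can be sketched as a pipeline skeleton. Every function body here is a placeholder assumption; only the control flow — phases 1 through 4, then re-running phases 2 to 4 on the output to flag entropy drift — follows the text:

```python
# Skeleton of the four-phase translation pipeline with feedback.

def phase1_identify(text):
    """Verify language, character set, spelling and grammar (stub)."""
    return {"text": text, "lang": "en"}

def phase2_annotate(doc):
    """Attach definition entries; mark names/dates for transliteration."""
    doc["tokens"] = doc["text"].split()
    return doc

def phase3_sop(doc):
    """Extract (possibly nested) subject-object-predicate structure."""
    doc["sop"] = ("I", "car", "want")  # placeholder parse
    return doc

def phase4_entropy(doc):
    """Boolean minimization / entropy extraction of the meaning."""
    doc["entropy"] = "REQUEST(car)"    # placeholder meaning
    return doc

def translate(text):
    doc = phase4_entropy(phase3_sop(phase2_annotate(phase1_identify(text))))
    output = doc["entropy"]            # target-language generation stub
    # Feedback: re-run phases 2-4 on the output; phase 1 is skipped,
    # as the text notes it is redundant for self-generated output.
    check = phase4_entropy(phase3_sop(phase2_annotate({"text": output})))
    doc["verified"] = check["entropy"] == doc["entropy"]
    return doc

result = translate("I want a car")
print(result["entropy"], result["verified"])
```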

Large-scale deployment will begin with the inclusion of GANETS within the ChatterBox application, as a server extension for PHP and NLSO servers, and as a module for software developers. Prior to deployment, the entire system will be modified to use all connected clients as part of an extended network, so that the learning capability vastly exceeds that of any individual system. Language-specific neural maps will be fed back to the central server in much the same way the NLSO system is designed to operate, thus providing initialization data on demand for clients as required. (Choigan (talk) 21:58, 29 June 2009 (UTC))

Proposal for Advanced entropy based machine translation using Adaptive Neural Networks by Ivan P Gan
I, Ivan P. Gan, am the author of this article, Proposal for Advanced entropy based machine translation using Adaptive Neural Networks, and I release its content under the terms of the GNU Free Documentation License.

Proposal for Advanced entropy based machine translation using Adaptive Neural Networks  by Ivan P Gan

This document is aimed at a technology audience & as such assumes a high level of knowledge with the reader of software design, web design, translation memory systems, RDBMS, networks, communication protocols, server systems, i18n, l10n, g11n, character encoding methods & practices along with some knowledge of adaptive neural networks This document should be read in conjunction with the Language support systems development document by Ivan P. Gan

NLSO/LSF means Neutral Language Support Objects / Language Support Framework Language Neural defines a web site or software designed for generic Language support having no inherent dependency on any particular language and having a support framework to allow language switching at run time without re-compilation or modification of the software itself The UTF-8 standard describes a method of computer encoding and storage of language dependent character information which is capable of extending well beyond the Latin alphabet UTF-16, UTF-32, UCS-2 are also character encoding standards In this context the supported character sets are UTF-8 for web design applications, UTF-8 & UTF-16 for applications developed with the Lazarus Compiler Neural networks come in many forms Analog, using op-amps, resistors etc. Soft Digital, using various techniques from simple discriminators to fully adaptive complex networks implemented in software Firm Digital, implemented using programmable logic arrays Hard Digital, implemented using application specific chips designed to create digital neural networks Most implementations run as isolated systems of very limited capacity on a single machine such as those in Optical Character Recognition and do not learn from each other or use centralized systems to store neuron maps Some experimental systems have been run on networked computer though at this time there appear to be no large scale project This fact limits the scope of these systems dramatically much in the way that isolated computers are limited by lack of access to data resources available via the net

Global Adaptive Neural Entropy Translation Systems (GANETS) There are many research groups working worldwide on the task of machine translation, these legacy efforts ignore the message entropy and work on language pairs These systems therefore are inherently inaccurate and require complex, unwieldy rules structures which are bound to language pairs, not universal My proposal to use entropy based translation is a completely different approach to this rules based system, moving instead to an understanding based knowledge system I propose using a centralized data structure built on top of NLSO via ComChatter.com servers so there's a common reference point for neural maps and resource data The other major shift from traditional systems is designing the neural systems to run locally and over LAN/WAN links to form a single unified neural entity thus taking advantage of the vast computing power and human guidance for  unprecedented learning & translating capability

There are several stages to achieving the result as follows

Language identification Language identification is undertaken using various techniques such as character  set recognition & spell checks

Place Name (address) recognition, transliteration & translation Place names are easily recognized by humans, not so with machines, a word is just letters devoid of meaning, our GEO database makes it possible to recognize place names, therefore conveying meaning to the translation engine Many names are not translated, instead they are transliterated to the target writing system

Name, Trademark, Product, Service & Company name recognition & transliteration As with place names, recognition personal names, trademarks, products, services & company names carries meaning which is unavailable to legacy translation methods, normally these will be transliterated between writing systems, if not carried through as-is Personal names are matched with probably statistics to differentiate between family names, first names & gender Though it's possible to list names with racial, religious & national probability, this is avoided to circumvent contentious & divisive use of data, in any event it does not constructively contribute to the translation task These names will normally be transliterated between writing systems as opposed to translated

Currency, Time/Date, Phone numbers, Url's, Email addresses & Postcodes Relatively simple tests will recognized these in their respective classes with a high level of confidence, they can be verified using a series of simple tests, DNS lookup on a Url, DNS MX record to verify an email, postcode matching, etc. When looking at the recognition of Place names, Real names, Trademarks, etc. it rapidly becomes apparent how essential in the data systems are to rapid recognition of the elements within the text

Describing words This section is partially repeated from the Language support systems development document by Ivan P. Gan for convenience The reason this section is so important is the meaning it gives to each word Consider Marmalade, Apple, Bicycle & Rock, what do these words mean? What would they mean if I spoke to you for the first time or if you had no knowledge of English? Exactly nothing, devoid of purpose & meaning & it is this which is the defining difference with my research, we give the words meaning by classifying & describing them The word apple  we associate with fruit, food, apple tree, living natural organism, round approx 4 to 8 cm diameter, therefore we understand what it refers to The same with Rock etc. and in this is the entropy we need to effectively translate Car=Voiture, or does it? Voiture actually means vehicle & car is a specific type of vehicle so there's an inconsistency in the entropy conveyed during translation and a resultant data loss! Consider the following phrases, “I want a blue car”, “I will have a blue car”, “Give me the blue car”, “The blue car is the one I want” The entropy for all four phrases is “I require blue car” More complex variations:”I am hungry”, “I want food”, “I want to eat”, “Feed me”, “Give me food”, “Some food please”, “I want dinner”, “I would like a meal” The entropy for each variation is essentially “I am hungry” From this contrived example one can see the scale of the problem and why humans so far have beaten machines in translation, one also sees the importance of entropy as a means of translation The more accurate the entropy to begin with, the more accurate the resultant translation, this can only be learned over time through machine experience Traditional approaches operate in isolation, the use of centralized database's via networks combines the learning experience of all systems

There are numerous entries for each word in our database, namely Word Usage % of overall usage Rank Oxford style descriptive entry Say-As guide for speech synthesis systems Gender, Neutral, Masculine, Feminine Voice recording of both male & female native speakers of the language Related words Singular/Plural terms Grouping, Greetings, Business, Social, etc. Classification, some words can have more than one classification Common confusables & context variable words Context probability entries (helps identify cardiac as belonging to a medical context, titanium as metallurgy:20, engineering:70) The numeric weight value associated with each context helps identify the overall context of a document, weights are adjusted as part of the learning process both on a local and on a global basis

The classification of words can include Entity o	Physical, Cat, Rock, Water, etc o	Conceptual, Program, Song, Light, Radiation, Energy, Ghost Descriptive, Blue, Hard, Soft, Wet, Noisy, Hot Time/Chronology, First, Last, Now, Later, Future, Present, Past Action, Run, Fly, Walk, Speak, Drive, Dance, Add, Subtract, Divide Quantities, Number, More, Less, Higher, Lots,Few, Hotter Logical & Conditional, True, False, If, Else, And, Not, Neither, Nor, Any, Either Emotional, Love, Hate, Fear, Like, Loathe Location (this is also a real name and a physical entity) Real names/Titles (can also be a location, physical entity or conceptual entity), also defines Male/Female, First/Family name probability fields Acronyms & abbreviations (can refer to any other classification) Singular/Plural/Multiple

Entropy tables & the NLSO database The phrase bank we already have will in due course be scanned for entropy & the results entered in an entropy table with an entropy reference entry placed in reference to the mnemonics table This approach allows multiple phrases to reference a single entropy, it also allows better cross matching of phrases supplied via human translators

The Neural Translator The translator is a learning, self aligning network & during the initial phrases we guide that learning to create the initial rules & weights Once basic learning is achieved we feed the system our entire English language NLSO database to re-enforce rules & weight values Human correction & training will be used to correct entropy errors, no attempt at this stage will be made to do any translation Re-scan to improve entropy quality and flag differences against the first pass, with human interaction to guide the network, multiple passes may be made After the initial results are verified we task the computer with learning grammars, & entropy extraction  from other languages using the translations already in our database, entropy conflicts are again highlighted for correction After successful training of the initial network we begin live trails

The 1st phase of any translation is identifying the input text, this verifies the language, character set, spelling & grammar of the source data The 2nd phase in parsing the data is to associate the words with their respective definition entries, identifying Names, Places, Dates, etc. and making them for transliteration or re-structuring/re-formatting where appropriate During 2nd phase scanning, context probability data is calculated to assist the translator in creating the correct entropy & gender relationships Phase 2 scans a substantial portion of a large document or an entire small document to gain a better context picture prior to engaging the 3rd phase Phase 3 is what I call SOP. Subject, Object, Predicate, these can be nested in complex sentences! During phase 4 neural networks then attempt Boolean minimization & entropy extraction to get an accurate image of the message meaning The target language translator looks for entropy matches against the known database & uses the reference entry associated if found If no entry exists for the target language the output stage then creates a phrase based on the input & context with respect to gender relationships if defined during input scanning, or in the case of emails & instant messages if the recipient's gender is known, it overrides the  calculated gender After translation the output is fed back to the system using phases 2, 3 & 4 to extract the entropy & flag errors, it is this phase which improves the learning and accuracy of the overall system Phase 1 is redundant for this purpose as the spelling & grammar output will be derived from the same rules as Phase 1 analysis anyhow

Large scale deployment will begin with inclusion of GANETS within ChatterBox application, as a server extension for PHP & NLSO servers and as a module for software developers The entire system, prior to development will be modified to use all connected clients as part of an extended network so that the learning capability vastly exceeds any individual systems Language specific neural maps will be fed back to the central server in much the same way the NLSO system is designed to operate, thus providing initialization data on demand for clients as required (Choigan (talk) 22:00, 29 June 2009 (UTC))

Proposal for Advanced entropy based machine translation using Adaptive Neural Networks by Ivan P Gan
I, Ivan P. GAn, am the author of this article, Proposal for Advanced entropy based machine translation using Adaptive Neural Networks, and I release its content under the terms of the GNU Free Documentation Licens

Proposal for Advanced entropy based machine translation using Adaptive Neural Networks  by Ivan P Gan

This document is aimed at a technology audience & as such assumes a high level of knowledge with the reader of software design, web design, translation memory systems, RDBMS, networks, communication protocols, server systems, i18n, l10n, g11n, character encoding methods & practices along with some knowledge of adaptive neural networks This document should be read in conjunction with the Language support systems development document by Ivan P. Gan

NLSO/LSF means Neutral Language Support Objects / Language Support Framework Language Neural defines a web site or software designed for generic Language support having no inherent dependency on any particular language and having a support framework to allow language switching at run time without re-compilation or modification of the software itself The UTF-8 standard describes a method of computer encoding and storage of language dependent character information which is capable of extending well beyond the Latin alphabet UTF-16, UTF-32, UCS-2 are also character encoding standards In this context the supported character sets are UTF-8 for web design applications, UTF-8 & UTF-16 for applications developed with the Lazarus Compiler Neural networks come in many forms Analog, using op-amps, resistors etc. Soft Digital, using various techniques from simple discriminators to fully adaptive complex networks implemented in software Firm Digital, implemented using programmable logic arrays Hard Digital, implemented using application specific chips designed to create digital neural networks Most implementations run as isolated systems of very limited capacity on a single machine such as those in Optical Character Recognition and do not learn from each other or use centralized systems to store neuron maps Some experimental systems have been run on networked computer though at this time there appear to be no large scale project This fact limits the scope of these systems dramatically much in the way that isolated computers are limited by lack of access to data resources available via the net

Global Adaptive Neural Entropy Translation Systems (GANETS) There are many research groups working worldwide on the task of machine translation, these legacy efforts ignore the message entropy and work on language pairs These systems therefore are inherently inaccurate and require complex, unwieldy rules structures which are bound to language pairs, not universal My proposal to use entropy based translation is a completely different approach to this rules based system, moving instead to an understanding based knowledge system I propose using a centralized data structure built on top of NLSO via ComChatter.com servers so there's a common reference point for neural maps and resource data The other major shift from traditional systems is designing the neural systems to run locally and over LAN/WAN links to form a single unified neural entity thus taking advantage of the vast computing power and human guidance for  unprecedented learning & translating capability

There are several stages to achieving the result as follows

Language identification Language identification is undertaken using various techniques such as character  set recognition & spell checks

Place Name (address) recognition, transliteration & translation Place names are easily recognized by humans, not so with machines, a word is just letters devoid of meaning, our GEO database makes it possible to recognize place names, therefore conveying meaning to the translation engine Many names are not translated, instead they are transliterated to the target writing system

Name, Trademark, Product, Service & Company name recognition & transliteration As with place names, recognition personal names, trademarks, products, services & company names carries meaning which is unavailable to legacy translation methods, normally these will be transliterated between writing systems, if not carried through as-is Personal names are matched with probably statistics to differentiate between family names, first names & gender Though it's possible to list names with racial, religious & national probability, this is avoided to circumvent contentious & divisive use of data, in any event it does not constructively contribute to the translation task These names will normally be transliterated between writing systems as opposed to translated

Currency, Time/Date, Phone numbers, Url's, Email addresses & Postcodes Relatively simple tests will recognized these in their respective classes with a high level of confidence, they can be verified using a series of simple tests, DNS lookup on a Url, DNS MX record to verify an email, postcode matching, etc. When looking at the recognition of Place names, Real names, Trademarks, etc. it rapidly becomes apparent how essential in the data systems are to rapid recognition of the elements within the text

Describing words This section is partially repeated from the Language support systems development document by Ivan P. Gan for convenience The reason this section is so important is the meaning it gives to each word Consider Marmalade, Apple, Bicycle & Rock, what do these words mean? What would they mean if I spoke to you for the first time or if you had no knowledge of English? Exactly nothing, devoid of purpose & meaning & it is this which is the defining difference with my research, we give the words meaning by classifying & describing them The word apple  we associate with fruit, food, apple tree, living natural organism, round approx 4 to 8 cm diameter, therefore we understand what it refers to The same with Rock etc. and in this is the entropy we need to effectively translate Car=Voiture, or does it? Voiture actually means vehicle & car is a specific type of vehicle so there's an inconsistency in the entropy conveyed during translation and a resultant data loss! Consider the following phrases, “I want a blue car”, “I will have a blue car”, “Give me the blue car”, “The blue car is the one I want” The entropy for all four phrases is “I require blue car” More complex variations:”I am hungry”, “I want food”, “I want to eat”, “Feed me”, “Give me food”, “Some food please”, “I want dinner”, “I would like a meal” The entropy for each variation is essentially “I am hungry” From this contrived example one can see the scale of the problem and why humans so far have beaten machines in translation, one also sees the importance of entropy as a means of translation The more accurate the entropy to begin with, the more accurate the resultant translation, this can only be learned over time through machine experience Traditional approaches operate in isolation, the use of centralized database's via networks combines the learning experience of all systems

There are numerous entries for each word in our database, namely:
- Word
- Usage % of overall usage
- Rank
- Oxford style descriptive entry
- Say-As guide for speech synthesis systems
- Gender: Neutral, Masculine, Feminine
- Voice recording of both male & female native speakers of the language
- Related words
- Singular/Plural terms
- Grouping: Greetings, Business, Social, etc.
- Classification (some words can have more than one classification)
- Common confusables & context variable words
- Context probability entries (helps identify cardiac as belonging to a medical context, titanium as metallurgy:20, engineering:70)

The numeric weight value associated with each context helps identify the overall context of a document; weights are adjusted as part of the learning process, both on a local & on a global basis.

The classification of words can include:
- Entity
  o Physical: Cat, Rock, Water, etc.
  o Conceptual: Program, Song, Light, Radiation, Energy, Ghost
- Descriptive: Blue, Hard, Soft, Wet, Noisy, Hot
- Time/Chronology: First, Last, Now, Later, Future, Present, Past
- Action: Run, Fly, Walk, Speak, Drive, Dance, Add, Subtract, Divide
- Quantities: Number, More, Less, Higher, Lots, Few, Hotter
- Logical & Conditional: True, False, If, Else, And, Not, Neither, Nor, Any, Either
- Emotional: Love, Hate, Fear, Like, Loathe
- Location (this is also a real name & a physical entity)
- Real names/Titles (can also be a location, physical entity or conceptual entity); also defines Male/Female & First/Family name probability fields
- Acronyms & abbreviations (can refer to any other classification)
- Singular/Plural/Multiple
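Since one word can hold several classifications at once, the taxonomy could be modelled as an enumeration with a set of classes per word. The enum values & the example word are assumptions for illustration; the multi-classification example mirrors the note above that a location is also a real name & a physical entity.

```python
from enum import Enum

class WordClass(Enum):
    PHYSICAL_ENTITY = "physical entity"
    CONCEPTUAL_ENTITY = "conceptual entity"
    DESCRIPTIVE = "descriptive"
    TIME = "time/chronology"
    ACTION = "action"
    QUANTITY = "quantity"
    LOGICAL = "logical/conditional"
    EMOTIONAL = "emotional"
    LOCATION = "location"
    REAL_NAME = "real name/title"
    ACRONYM = "acronym/abbreviation"

# Per the text, one word may carry several classifications at once:
# a place name can be a location, a real name, and a physical entity.
washington = {WordClass.LOCATION, WordClass.REAL_NAME, WordClass.PHYSICAL_ENTITY}
```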

Entropy tables & the NLSO database

The phrase bank we already have will in due course be scanned for entropy, & the results entered in an entropy table, with an entropy reference entry placed in reference to the mnemonics table. This approach allows multiple phrases to reference a single entropy; it also allows better cross-matching of phrases supplied via human translators.
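The table relationships described above can be sketched as a toy schema: many phrase rows point at one entropy row, which in turn carries a reference into the mnemonics table. Table shapes, the entropy notation & the reference number are hypothetical.

```python
# Toy schema sketch: many phrases -> one entropy row -> mnemonics reference.
# The mnemonic_ref value is a placeholder, not a real table entry.
entropy_rows = {
    1: {"entropy": "STATE(speaker, hunger)", "mnemonic_ref": 417},
}
phrase_rows = {
    "i am hungry": 1,
    "i want food": 1,
    "feed me": 1,
}

def phrases_for_entropy(entropy_id: int) -> list[str]:
    """Cross-match: all surface phrases that share one entropy entry."""
    return sorted(p for p, eid in phrase_rows.items() if eid == entropy_id)
```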

The Neural Translator

The translator is a learning, self-aligning network, & during the initial phases we guide that learning to create the initial rules & weights. Once basic learning is achieved we feed the system our entire English language NLSO database to reinforce rules & weight values. Human correction & training will be used to correct entropy errors; no attempt at this stage will be made to do any translation. We then re-scan to improve entropy quality & flag differences against the first pass, with human interaction to guide the network; multiple passes may be made. After the initial results are verified we task the computer with learning grammars & entropy extraction from other languages, using the translations already in our database; entropy conflicts are again highlighted for correction. After successful training of the initial network we begin live trials.
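The multi-pass re-scan step could be sketched as below: each pass re-extracts entropy for the corpus & flags phrases whose result differs from the previous pass, for human review. The `extract` callable is a stand-in for the (unspecified) neural extractor; the pass count is arbitrary.

```python
# Sketch of the multi-pass idea: re-scan the corpus, compare each pass's
# entropy extraction with the previous pass, and flag differences so a
# human can guide the network. `extract` stands in for the neural extractor.
def training_passes(corpus, extract, n_passes=3):
    previous = {}
    flagged_history = []
    for _ in range(n_passes):
        current = {phrase: extract(phrase) for phrase in corpus}
        flagged = [p for p in corpus if p in previous and previous[p] != current[p]]
        flagged_history.append(flagged)
        previous = current
    return flagged_history
```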

The 1st phase of any translation is identifying the input text; this verifies the language, character set, spelling & grammar of the source data.

The 2nd phase in parsing the data is to associate the words with their respective definition entries, identifying names, places, dates, etc. & marking them for transliteration or re-structuring/re-formatting where appropriate. During 2nd phase scanning, context probability data is calculated to assist the translator in creating the correct entropy & gender relationships. Phase 2 scans a substantial portion of a large document, or an entire small document, to gain a better context picture prior to engaging the 3rd phase.

Phase 3 is what I call SOP: Subject, Object, Predicate; these can be nested in complex sentences!

During phase 4, neural networks then attempt Boolean minimization & entropy extraction to get an accurate image of the message meaning. The target language translator looks for entropy matches against the known database & uses the associated reference entry if found. If no entry exists for the target language, the output stage then creates a phrase based on the input & context, with respect to gender relationships if defined during input scanning; in the case of emails & instant messages, if the recipient's gender is known it overrides the calculated gender.

After translation the output is fed back to the system using phases 2, 3 & 4 to extract the entropy & flag errors; it is this phase which improves the learning & accuracy of the overall system. Phase 1 is redundant for this purpose, as the spelling & grammar of the output will be derived from the same rules as Phase 1 analysis anyhow.
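The phase pipeline described above, including the phase 2-4 feedback check on the output, could be sketched as a function chain. Every stage function here is a caller-supplied stand-in; their real implementations & interfaces are unspecified in this document.

```python
# Hypothetical skeleton of the four-phase pipeline plus the feedback loop.
# Each stage is passed in as a callable stand-in for the real component.
def translate(text, identify, parse, sop_analyse, extract_entropy_fn, render):
    source_info = identify(text)              # phase 1: language, charset, spelling
    tagged = parse(text, source_info)         # phase 2: definitions, names, context
    structure = sop_analyse(tagged)           # phase 3: Subject/Object/Predicate
    entropy = extract_entropy_fn(structure)   # phase 4: minimized meaning
    output = render(entropy)                  # target-language generation
    # Feedback: phases 2-4 re-run on the output to flag entropy drift.
    check = extract_entropy_fn(sop_analyse(parse(output, source_info)))
    return output, (check == entropy)
```

With trivial stand-ins (split/tuple/frozenset), the feedback check confirms the output carries the same "entropy" as the input.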

Large scale deployment will begin with the inclusion of GANETS within the ChatterBox application, as a server extension for PHP & NLSO servers, & as a module for software developers. The entire system, prior to deployment, will be modified to use all connected clients as part of an extended network, so that the learning capability vastly exceeds that of any individual system. Language-specific neural maps will be fed back to the central server in much the same way the NLSO system is designed to operate, thus providing initialization data on demand for clients as required.