
Extraction from natural language sources
The largest share of the information contained in business documents, estimated at around 80%, is encoded in natural language and is therefore unstructured. Because unstructured data are poorly suited for knowledge extraction, more complex methods are required, and these generally yield worse results than would be possible with structured data. The sheer quantity of extractable knowledge is meant to compensate for the increased complexity and reduced quality of extraction. In the following, natural language sources are understood as sources of information whose data are given in an unstructured fashion as plain text. The text may additionally be embedded in a markup document (e.g. an HTML document), since most systems remove the markup elements automatically.

Traditional Information Extraction (IE)
Traditional Information Extraction is a technology of natural language processing that extracts information from typically natural language texts and structures it in a suitable manner. The kinds of information to be identified must be specified in a model before the process begins, which is why the whole process of Traditional Information Extraction is domain dependent. IE is split into the following five subtasks.


 * Named Entity Recognition (NER)
 * Coreference Resolution (CO)
 * Template Element Construction (TE)
 * Template Relation Construction (TR)
 * Scenario Template Production (ST)

The task of Named Entity Recognition is to recognize and categorize all named entities contained in a text, i.e. to assign each named entity to a predefined category. This is done by applying grammar-based methods or statistical models.
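A minimal sketch of the grammar-based flavor of this task: the toy recognizer below matches token sequences from a small hand-made gazetteer against the text and assigns the stored category. The gazetteer entries, categories, and function names are illustrative assumptions, not part of any real NER system.

```python
import re

# Toy gazetteer mapping surface forms to predefined categories
# (entries and categories are invented for illustration).
GAZETTEER = {
    "IBM": "ORGANIZATION",
    "IBM Europe": "ORGANIZATION",
    "Paris": "LOCATION",
    "John Smith": "PERSON",
}

def recognize_entities(text):
    """Return (entity, category, (start, end)) triples found in the text.

    Longer gazetteer entries are tried first so that "IBM Europe"
    wins over the shorter match "IBM".
    """
    results = []
    for name in sorted(GAZETTEER, key=len, reverse=True):
        for m in re.finditer(r"\b" + re.escape(name) + r"\b", text):
            # Skip spans already covered by a longer match.
            if any(s <= m.start() < e for _, _, (s, e) in results):
                continue
            results.append((name, GAZETTEER[name], (m.start(), m.end())))
    return sorted(results, key=lambda r: r[2])

print(recognize_entities("John Smith works for IBM Europe in Paris."))
```

Statistical NER systems replace the fixed gazetteer with a model trained on annotated text, but the output shape (span plus category) stays the same.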

Coreference Resolution identifies which of the entities recognized by NER are equivalent within a text. There are two relevant kinds of equivalence relationship: the first holds between two different representations of the same entity (e.g. IBM Europe and IBM), the second between an entity and its anaphoric references (e.g. it and IBM). Both kinds should be recognized by Coreference Resolution.
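Both kinds of equivalence can be sketched with crude heuristics: alias mentions are grouped when one mention's tokens are a subset of the other's, and a pronoun is linked to the nearest preceding entity mention. These heuristics are illustrative assumptions standing in for real coreference models.

```python
def group_aliases(entities):
    """Group mentions whose tokens overlap by inclusion, e.g. "IBM Europe"/"IBM".

    A toy stand-in for alias resolution: two mentions are considered
    coreferent if one mention's token set contains the other's.
    """
    groups = []
    for mention in entities:
        tokens = set(mention.split())
        for group in groups:
            if any(tokens <= set(m.split()) or set(m.split()) <= tokens
                   for m in group):
                group.append(mention)
                break
        else:
            groups.append([mention])
    return groups

def resolve_pronouns(tokens, entity_mentions):
    """Link each 'it' to the nearest preceding entity mention (toy heuristic)."""
    links, last_entity = [], None
    for tok in tokens:
        if tok in entity_mentions:
            last_entity = tok
        elif tok.lower() == "it" and last_entity:
            links.append((tok, last_entity))
    return links

print(group_aliases(["IBM Europe", "IBM", "Paris"]))
print(resolve_pronouns("IBM announced that it will expand".split(), {"IBM"}))
```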

During Template Element Construction, the IE system identifies descriptive properties of the entities recognized by NER and CO. These properties correspond to ordinary qualities such as red or big.

Template Relation Construction identifies relations that exist between the template elements. These relations can be of several kinds, such as works-for or located-in, with the restriction that both domain and range must correspond to entities.

In Scenario Template Production, events described in the text are identified and structured with respect to the entities recognized by NER and CO and the relations identified by TR.

Ontology-Based Information Extraction (OBIE)
Ontology-Based Information Extraction is a subfield of Information Extraction in which at least one ontology is used to guide the process of information extraction from natural language text. To this end, the OBIE system uses methods of Traditional Information Extraction to identify the concepts, instances, and relations of the used ontologies in the text; the results are structured into an ontology after the process. Thus, the input ontologies constitute the model of the information to be extracted.

Ontology Learning (OL)
With Ontology Learning, whole ontologies are semi-automatically extracted from natural language text. It can therefore be applied to support ontology engineering. It is split into the following seven subtasks, not all of which need to be supported by every OL system.


 * Domain Terminology Extraction
 * Concept Discovery
 * Concept Hierarchy Derivation
 * Learning of non-taxonomic relations
 * Rule Discovery
 * Ontology Population
 * Concept Hierarchy Extension

During Domain Terminology Extraction, domain-specific terms are extracted, which are used in the subsequent Concept Discovery to derive concepts. Relevant terms can be determined, e.g., by calculating TF/IDF values or by applying the C-value/NC-value method. The resulting list of terms has to be filtered by a domain expert. Subsequently, similarly to Coreference Resolution in IE, the OL system determines synonyms, because these share the same meaning and therefore correspond to the same concept. The most common methods for this are clustering and the application of statistical similarity measures.
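The TF/IDF calculation mentioned above can be sketched in a few lines: a term scores high when it is frequent in one document but rare across the corpus. The toy corpus and the exact TF/IDF variant (relative frequency times log inverse document frequency) are assumptions for illustration.

```python
import math

def tf_idf(term, doc_tokens, corpus):
    """Simple TF-IDF score for `term` in one tokenized document.

    TF is the relative frequency in the document; IDF is the log of the
    number of documents divided by the number of documents containing the term.
    """
    tf = doc_tokens.count(term) / len(doc_tokens)
    df = sum(1 for d in corpus if term in d)
    idf = math.log(len(corpus) / df) if df else 0.0
    return tf * idf

# Toy corpus: candidate domain terms score higher than common words.
corpus = [
    "the ontology describes the domain".split(),
    "the report lists sales figures".split(),
    "the ontology population adds instances".split(),
]
doc = corpus[0]
scores = {t: tf_idf(t, doc, corpus) for t in set(doc)}
ranked = sorted(scores, key=scores.get, reverse=True)
print(ranked)
```

Note how "the", appearing in every document, scores zero, while document-specific words float to the top; the expert filtering step described above would then prune the remaining noise.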

In Concept Discovery, terms are grouped into meaning-bearing units, which correspond to abstractions of the world and therefore to concepts. The grouped terms are the domain-specific terms and their synonyms that were identified during Domain Terminology Extraction.

In Concept Hierarchy Derivation, the OL system tries to arrange the extracted concepts in a taxonomic structure. This is mostly achieved with unsupervised hierarchical clustering methods. Because the result of such methods is often noisy, supervision, e.g. in the form of evaluation by the user, is integrated. A further method for deriving a concept hierarchy is the use of lexico-syntactic patterns that indicate a subclass or superclass relationship. Patterns like "X, which is a Y" or "X is a Y" indicate that X is a subclass of Y. Such patterns can be analyzed efficiently, but they occur too infrequently to extract enough subclass relationships. Instead, bootstrapping methods have been developed that learn these patterns automatically and therefore ensure higher coverage.
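The pattern-based approach can be sketched with two hard-coded regular expressions for the patterns quoted above. Real systems bootstrap many more patterns automatically; the example sentences and pattern set here are illustrative assumptions.

```python
import re

# Two illustrative lexico-syntactic patterns; each yields (subclass, superclass).
PATTERNS = [
    re.compile(r"(\w+), which is an? (\w+)"),
    re.compile(r"(\w+) is an? (\w+)"),
]

def extract_subclass_relations(text):
    """Extract (subclass, superclass) pairs via simple hard-coded patterns.

    Longer patterns run first and their matches are removed, so the short
    "X is a Y" pattern does not re-match inside "X, which is a Y".
    """
    pairs = set()
    for pattern in PATTERNS:
        for m in pattern.finditer(text):
            pairs.add(m.groups())
        text = pattern.sub("", text)
    return pairs

text = "A sparrow is a bird. The beagle, which is a dog, barked."
print(extract_subclass_relations(text))
```

Precision of such patterns is high, but, as noted above, they match too rarely for good coverage, which is what motivates learning patterns by bootstrapping.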

During the Learning of non-taxonomic relations, relationships are extracted that do not express any subclass or superclass relation. Such relationships are, e.g., works-for or located-in. There are two common approaches to this subtask. The first is based on the extraction of anonymous associations, which are named appropriately in a second step. The second approach extracts verbs that indicate a relationship between the entities represented by the surrounding words. The result of both approaches has to be evaluated by an ontologist.
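The verb-based approach can be sketched as follows: a known relation verb names the relation, and recognized entities on either side of it fill the relation's arguments. The verb list, relation labels, and entity sets are assumptions made for the example.

```python
import re

# Verb phrases assumed to signal a non-taxonomic relation (illustrative list).
RELATION_VERBS = {"works for": "works-for", "is located in": "located-in"}

def extract_relations(sentence, entities):
    """Find (entity, relation, entity) triples around known relation verbs.

    The verb phrase names the relation; the nearest recognized entities
    to its left and right fill the domain and range.
    """
    triples = []
    for phrase, label in RELATION_VERBS.items():
        m = re.search(re.escape(phrase), sentence)
        if not m:
            continue
        left, right = sentence[:m.start()], sentence[m.end():]
        subj = next((e for e in entities if e in left), None)
        obj = next((e for e in entities if e in right), None)
        if subj and obj:
            triples.append((subj, label, obj))
    return triples

print(extract_relations("Alice works for IBM", {"Alice", "IBM"}))
```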

In Rule Discovery, axioms (formal descriptions of concepts) are generated for the extracted concepts. This can be achieved, e.g., by analyzing the syntactic structure of a natural language definition and applying transformation rules to the resulting dependency tree. The result of this process is a list of axioms, which is afterwards combined into a concept description. This description has to be evaluated by an ontologist.

During Ontology Population, the ontology is augmented with instances of concepts and properties. For the augmentation with instances of concepts, methods based on the matching of lexico-syntactic patterns are used. Instances of properties are added by applying bootstrapping methods, which collect relation tuples.

In Concept Hierarchy Extension, the OL system tries to extend the taxonomic structure of an existing ontology with further concepts. This can be done in a supervised way with a trained classifier, or in an unsupervised way through the application of similarity measures.

Semantic Annotation (SA)
During the Semantic Annotation of natural language text, the text is augmented with metadata (often represented in RDFa), which should make the semantics of the contained terms machine-understandable. In this process, which is generally semi-automatic, knowledge is extracted in the sense that a link is established between lexical terms and, e.g., concepts from ontologies. Thereby it is also determined which meaning of a term was intended in the processed context. Semi-automatic Semantic Annotation can be split into the following two subtasks.


 * Terminology Extraction
 * Entity Linking

During Terminology Extraction, lexical terms are extracted from the text. For this purpose a tokenizer first determines the word boundaries and expands abbreviations. Afterwards, terms from the text that correspond to a concept are extracted with the help of a domain-specific lexicon, so that they can be linked during Entity Linking.

During Entity Linking, a link is established between the extracted lexical terms from the source text and the concepts from an ontology. For this, candidate concepts corresponding to the several meanings of a term are detected with the help of a lexicon. Finally, the context of the terms is analyzed to determine the most appropriate disambiguation and to assign each term to the correct concept.
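A minimal sketch of this two-step procedure, under the assumption that the lexicon stores, for each surface term, its candidate concepts together with context words describing each meaning; the ambiguous term, concept names, and context sets below are invented for illustration.

```python
# Toy lexicon: each surface term maps to candidate concepts, and each
# candidate carries context words describing it (all values illustrative).
LEXICON = {
    "jaguar": [
        ("Jaguar_(animal)", {"cat", "jungle", "predator", "wildlife"}),
        ("Jaguar_(car)", {"car", "engine", "vehicle", "drive"}),
    ],
}

def link_entity(term, context_tokens):
    """Pick the candidate concept whose context words overlap the text most.

    Step 1: candidate concepts come from the lexicon.
    Step 2: the surrounding words decide between the term's meanings.
    """
    candidates = LEXICON.get(term.lower(), [])
    if not candidates:
        return None
    context = {t.lower().strip(".,") for t in context_tokens}
    return max(candidates, key=lambda c: len(c[1] & context))[0]

sentence = "The jaguar prowled through the jungle hunting its prey"
print(link_entity("jaguar", sentence.split()))
```

Real systems replace the bag-of-words overlap with richer context models, but the candidate-generation-then-disambiguation structure is the same.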

Tools
The following criteria can be used to categorize tools that extract knowledge from natural language text.

The following table characterizes some tools for Knowledge Extraction from natural language sources.