User:Imonban/sandbox

= Semantic Annotation of Anatomy =

From the beginning of medical history, two conceptually different modalities have been adopted in parallel for representing human anatomy: spatial data depicting the appearance of anatomical parts (e.g., 2D/3D images, 3D models) and symbolic information providing a descriptive documentation of anatomy (e.g., taxonomies, ontologies, reports, clinical notes, electronic patient records). Visual and symbolic representations are equally important for depicting anatomy, and the optimal solution lies in a tighter integration of the two, which can be realized through the annotation of spatial data with symbolic information. From the classical atlases up to modern digital atlases, the annotation of spatial data has provided an optimal understanding of anatomical knowledge. Moreover, the integration of spatial data and symbolic knowledge supports an effortless, dynamic navigation of the knowledge space, thus creating advanced pathways in modern clinical practice and stimulating new medical reasoning and the discovery of correlations.

The process of tagging spatial data with one or more texts (metadata), which may represent semantics, comments, links, or any other textual information, is known as linguistic annotation and can be classified as free-text-based or knowledge-driven annotation.

== Free-text annotation ==
Users are free to tag spatial data with any keyword they have in mind, e.g., notes or observations. Most of the existing medical image visualization software packages (OsiriX, YaDiv, 3D Slicer) allow the user to mark a region of interest (ROI) inside the images, either manually or automatically, and to tag it with user-defined observations (free text). In most cases, however, manually added keywords are unable to capture the objective meaning of the targeted data. In fact, the textual annotation reflects only the perspective and interest of the individual user, without placing the annotation in a diagnostic workflow that could be shared with other clinicians. Additionally, annotation expressed in natural language is influenced by several factors, such as language and context, and can be limited or ambiguous. Free-text annotation is thus convenient in an isolated interpretation environment, but it may not provide meaningful results in a network-based collaborative scenario.

== Knowledge-driven annotation ==
The terms are fixed and defined by an underlying formalized knowledge source, e.g., a taxonomy or an ontology. The formalized semantics of the annotation ensures a common, shared understanding, restricts the terms to an exhaustive set, and allows annotation only with this “controlled vocabulary”. Indeed, the main difference among the existing methods is the trade-off between flexibility and meaningfulness. Previous work associates virtual body models, generated from the Visible Human dataset, with a knowledge base of descriptive (symbolic) information, which permits an intuitive method for anatomy training in a distributed environment. However, semantic annotations have the potential to go beyond simply portraying anatomy for training: they can bridge the ambiguity of natural language by expressing notions and their computational representation in a formal language. Moreover, encoding how data items are related, and how these relations can be evaluated automatically, supports the definition of complex filters and search operations. For example, an MRI data set annotated with the conceptual tag “FMA:Knee joint” can be interpreted as capturing the spatial representation of “FMA:Knee joint”, which has constitutional parts such as “FMA:Lateral meniscus” and “FMA:Patellar ligament”. These facts imply that the visual content of the MRI data set also represents “FMA:Lateral meniscus” and “FMA:Patellar ligament”. Indeed, the annotation refers not only to the textual tag but also to the concept “FMA:Knee joint”, which has a formal definition in the Foundational Model of Anatomy (FMA). The combination of controlled annotation of patient data with a richer vocabulary and a sophisticated reasoning policy can dramatically increase the performance of data management, information navigation, and data retrieval systems.
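The part-of reasoning described above can be sketched in a few lines of Python. The tiny partonomy below is a hypothetical, hand-written excerpt written only for illustration; a real system would query the FMA ontology itself rather than a hard-coded dictionary.

```python
# Minimal sketch of knowledge-driven annotation with part-of reasoning.
# PART_OF maps a concept to the concept it is a constitutional part of.
# This excerpt is hypothetical; a real system would query the FMA ontology.

PART_OF = {
    "FMA:Lateral meniscus": "FMA:Knee joint",
    "FMA:Patellar ligament": "FMA:Knee joint",
    "FMA:Knee joint": "FMA:Lower limb",
}

def constitutional_parts(concept):
    """Return all concepts that are (transitively) parts of `concept`."""
    parts = set()
    frontier = [concept]
    while frontier:
        current = frontier.pop()
        for child, parent in PART_OF.items():
            if parent == current and child not in parts:
                parts.add(child)
                frontier.append(child)
    return parts

def implied_annotations(tags):
    """Expand a set of annotation tags with every implied part concept."""
    implied = set(tags)
    for tag in tags:
        implied |= constitutional_parts(tag)
    return implied

# A data set tagged only with "FMA:Knee joint" also depicts its parts:
tags = implied_annotations({"FMA:Knee joint"})
```

A search for “FMA:Lateral meniscus” would then also retrieve data sets that were annotated only at the coarser “FMA:Knee joint” level, which is the kind of complex filtering the formal definitions enable.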

== Historical background ==
From the beginning of medical history, anatomical knowledge has been illustrated either in a symbolic manner (e.g., names and synonyms of anatomical structures, functionalities, classifications, definitions, spatial relationships) or in a visual way (e.g., sketches, drawings, physical samples, images). The first historical evidence of the systematic study of human anatomy is the Egyptian Ebers Papyrus (c. 1600 BC), which described human anatomy symbolically with the help of almost 700 formulas and remedies. Herophilus (335–280 BC) and Erasistratus (304–250 BC) were the first to study anatomy by visual means, assembling a first human skeleton for osteology. A new age in anatomy began with Claudius Galen, who attempted to structure the conceptual knowledge of anatomy in his book “De ossibus ad tirones” by combining and compiling all existing sources of information. The next milestone of anatomical study was Vesalius's “De humani corporis fabrica” (1543), in which emphasis was given to the “anatomical view of the body” by representing internal organs and their functioning in three-dimensional space through realistic sketches. Afterwards, many famous artists, e.g., Leonardo da Vinci, Michelangelo, and Rembrandt, studied anatomy and published anatomical sketches. Sketching became one of the preferred ways of transferring anatomical knowledge, but symbolic representations remained equally indispensable. Anatomical illustration via classic atlases was a successful attempt to express anatomy in a complementary way: pure visualization through sketches emphasizes immediacy and direct access to information, whereas annotation in natural language targets the expressibility and communicability of anatomical knowledge. In 1895, the discovery of the X-ray provided the first means to capture a snapshot of the interior of an in vivo body without dissection, and the modern phase of anatomical knowledge representation began.
However, a proper interpretation of these radiographic snapshots requires particular expertise in anatomy. History shows that anatomy is indeed highly visual in nature; at the same time, it requires highly descriptive documentation for optimal understanding. Over the ages, classical atlases have been the popular medium for conveying canonical knowledge; since the beginning of the digital age, patient-specific anatomy has been represented via two complementary approaches, relying either on formalized knowledge (reports, clinical notes, electronic patient records) or on multi-modal medical imaging (visual content: X-ray, CT, MRI). Fewer efforts have been directed at achieving a comprehensive integration between these two modalities for creating a patient-specific atlas.

== Existing tools ==
The following approaches, published between 2005 and 2015, each show a specific way to link visual 3D data with symbolic knowledge of anatomy. Besides realistic atlas creation for anatomy training, a main challenge is to devise methods that can integrate patient-specific spatial data and symbolic information to support knowledge-driven clinical trials.
* Google Body
* BioDigital Human
* Medical info service
* Voxel-Man
* W3C-VBS
* BodyParts3D
* Medico
* ePAD
* SemAnatomy3D

__NOINDEX__ __NEWSECTIONLINK__

== Research Challenges ==
Performing a comprehensive semantic annotation of all medical datasets is beyond human capacity because of their massive volume. However, an efficient combination of man and machine can improve the speed and efficiency of annotation and can offer a deeper understanding and utilization of anatomical data. In other words, semantic annotation software that extracts the implicit content of the input data, parses all available symbolic information about the patient history, and takes into account the formalized medical knowledge is becoming more and more relevant in this context. However, such automatic methods rely heavily on the availability of solutions for dealing with the “gap” between computational and semantic features, inter-subject variability, and the enormous amount of accessible information.