User:Markf129/Earth sciences data format interoperability

When studying the Earth sciences by observation or analytical models, both users and data collectors face the challenge of how best to organize and store the vast amount of information available. Different organizations may have specific technical goals, timeline constraints, or model constraints that drive new file conventions, distribution techniques, and architectures. While developing new solutions sometimes meets short-term goals, it often creates more complex long-term problems when standards are not adhered to. In some cases, science data has been slow to migrate to a standards-based approach. Because of these issues, interoperability of data is critical for collaboration and for building a continued quantitative understanding of the sciences.

Interoperability of observational or model data must be easy and transparent, without having to reformat the data, write special tools to read or extract the data, or rely on specific proprietary software. If common formats are adhered to, several benefits follow. First, the exchange of models and relevant science data is promoted. Second, observational data can be scaled and compared more easily to models. Third, confusion and unnecessary format conversions are eliminated. The last of these is perhaps the most important, as considerable time can be spent converting between different data formats. It is therefore important to understand the features and limitations of each.

Overview and definition
A data model (e.g. NetCDF) describes structured data by providing an unambiguous and neutral view of how the data is organized. Formally, a data model has three parts:
 * 1) A collection of data objects such as lists, tables, and relations.
 * 2) A collection of operations that can be applied to the objects, such as retrieval, update, subsetting, and averaging.
 * 3) A collection of integrity rules that define the legal states (sets of values) or changes of state (operations on values).

A file format defines how data is encoded for storage using a defined structure such as chunked, directory-based, or unstructured. Usually the file format is easily identified by the file name extension (e.g. .jpg, .bufr). Thus, the data model describes how the data is organized, and the file format how the data is stored. Finally, conventions describe what data types, formats, and design principles apply to a given data model and/or format (e.g. the Climate and Forecast Metadata Conventions). By identifying these three elements — model, format, and convention — data can be accurately described.

For example, data models contain components such as dimensions, variables, types, and attributes. Some models can even organize these components logically into groups. These components can be used together to capture the meaning of data and the relations among data fields in an array-oriented dataset. In contrast to variables, which are intended for bulk data, attributes are intended for ancillary data, or information about the data. Another difference is that variables may be multidimensional, while attributes are either scalars (single-valued) or vectors (a single, fixed dimension).
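As a concrete illustration of these components, the sketch below models dimensions, variables, and attributes — the NetCDF-classic picture described above — in plain Python. The class and method names here are hypothetical stand-ins for illustration, not the API of any real library.

```python
# Minimal sketch of an array-oriented data model: named dimensions,
# variables shaped by those dimensions, and attributes at both the
# file (global) and variable level. Illustrative only.

class Variable:
    def __init__(self, name, dtype, dims):
        self.name = name
        self.dtype = dtype
        self.dims = dims          # shape given by named dimensions
        self.attributes = {}      # ancillary information about the data
        self.data = []

class Dataset:
    def __init__(self):
        self.dimensions = {}      # name -> length
        self.variables = {}       # name -> Variable
        self.attributes = {}      # global (file-level) attributes

    def create_dimension(self, name, length):
        self.dimensions[name] = length

    def create_variable(self, name, dtype, dims):
        for d in dims:
            if d not in self.dimensions:
                raise KeyError(f"undefined dimension: {d}")
        var = Variable(name, dtype, dims)
        self.variables[name] = var
        return var

ds = Dataset()
ds.attributes["Conventions"] = "CF-1.8"   # a global attribute
ds.create_dimension("time", 4)
ds.create_dimension("lat", 73)
temp = ds.create_variable("temperature", "f4", ("time", "lat"))
temp.attributes["units"] = "K"            # a scalar variable attribute
```

Note how an attribute is a single named value, while a variable's shape is defined entirely by the named dimensions it references — the distinction drawn in the paragraph above.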

Interoperability requires that each dataset representation is understood at the core level of each model, so that the relationships between models can be established. In some cases, models may be compatible with one another simply because they use similar dataset structures.

Format overview
NetCDF is especially useful for gridded data and time series data, although it can be used with satellite swath data.

HDF is very useful in storing complex files with their associated metadata. HDF-EOS provides structural metadata at both the object and file level making it easier for client programs to read it. HDF-EOS defines certain kinds of earth science data objects, and specifies how to organize them in HDF4 and HDF5. HDF-EOS supports grid, swath, and point data.

GeoTIFF is a specialization of the TIFF format that incorporates geographic information embedded as tags within the file. The geographic information allows data in the TIFF formatted file to be displayed in geographically correct locations.

GRIB files contain one or more messages, or records, each holding a single parameter and an accompanying grid location (which can be a standard grid or user defined). Data is equally spaced at a defined latitude or longitude step, which is contained in the message. A single GRIB file can contain separate records for many different parameters. For example, one file could contain humidity data for several elevations over several time periods, as well as snow depth for the same elevations and time periods.
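The regular-grid case described above can be sketched as follows. The function and example values are illustrative assumptions, but the underlying idea — that a first point, a step, and a point count fully determine an equally spaced axis — is exactly what the message carries.

```python
# Sketch: reconstructing the lat/lon axes that a GRIB message describes.
# A regular grid axis is fully determined by a first value, an increment,
# and a point count, all of which are encoded in the message itself.

def grid_coordinates(first, step, count):
    """Return the equally spaced coordinate values for one grid axis."""
    return [first + i * step for i in range(count)]

# e.g. a coarse 2.5-degree global grid from 90N to 90S, 0E to 357.5E
lats = grid_coordinates(90.0, -2.5, 73)
lons = grid_coordinates(0.0, 2.5, 144)
```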

BUFR is the primary format used operationally on the World Meteorological Organization (WMO) Global Telecommunications System for real-time global exchange of weather and satellite observations. BUFR is self-describing and table-driven, and can encode a wide variety of meteorological data: land observations, radar data, climatological data, etc.

Data model relationships
It is important to recognize that any given application will have its own data structure and size, which may include variables, tables, arrays, meshes, etc. Each application must correctly map its own structure to that of the data model. Each data model will typically include:


 * File - a contiguous string of bytes in a computer.
   * HDF5 and NetCDF-4 are similar
   * NetCDF-4 can read HDF5
   * HDF5 cannot read the XDR-based NetCDF classic format
   * GRIB2 is backwards compatible with GRIB
 * Group - a collection of objects, which may include other groups.
   * HDF5 and NetCDF-4 use the same hierarchical concept, similar to the directory structure in Unix
   * GRIB groups are one field per message only
   * GRIB2 groups are more than one field, repeated, in a single message
 * Dimension - used to specify variable shapes, common grids, and coordinate systems.
   * NetCDF dimensions have a name and a length
   * HDF defines a dataset for descriptions and a dataspace for length
 * Variable - an array of values of the same type.
   * NetCDF variables are used in the same context as HDF data elements
 * Dataset - a multidimensional array of elements, with attributes and other metadata.
   * NetCDF defines this as variables, dimensions, and attributes
   * HDF defines this as data elements (variables), with dimensions and attributes described by the datatype and dataspace
   * A GRIB dataset is a collection of self-contained records
 * Datatype - a description of a specific class of data element, including its storage layout as a pattern of bits.
   * HDF datatypes define the storage format of an element
   * NetCDF datatypes are defined in the variable, either as text or numeric
 * Dataspace - a description of the dimensions of a multidimensional array.
   * An HDF dataspace describes a multidimensional array in terms of rank (a.k.a. dimensionality), current size, and maximum size
   * NetCDF defines the dimensions (scalar, vector, or matrix) in the variable as a shape
 * Attribute - a named data value associated with a group, dataset, or named datatype.
   * HDF5 and NetCDF use the same concept
 * Property List - a collection of parameters controlling options in the library model.

There are some important notes surrounding attributes in these data formats. Global file attributes are written to NetCDF files by assigning attributes to the variable that references the file. HDF typically stores global attributes at the beginning of its file. The GRIB format is a series of independent records with data points; however, space, time, and even the origin of the data must sometimes be derived outside of the file (i.e. from external tables). GRIB2 overcomes these challenges and allows for more diversity in the records.

Data model representations
NetCDF is a simple format that works best with gridded or time series data. The NetCDF classic model (NetCDF 64-bit offset, NetCDF-4 classic) represents:
 * dimensions, variables, and attributes
 * variables also have attributes
 * may contain common grids

The NetCDF enhanced model represents:
 * classic model representations
 * groups
 * user defined data types

HDF offers a variety of data structures with an API to read and write the data. HDF is good at storing complicated files with their respective metadata, and supports compression for storing larger files. HDF-EOS implements additional data structures designed to facilitate access to Earth science data, such as geolocation information alongside the data. The HDF4 model represents:
 * choice of 8 objects

The HDF5 model represents:
 * choice of dataset or group object
 * attributes or datasets to describe metadata

The HDF-EOS2 model:
 * includes HDF4 representations
 * supports data structures grid, point, and swath

The HDF-EOS5 model:
 * includes HDF5 representations
 * supports data structures grid, point, and swath

TIFF (Tagged Image File Format) is a raster file format for handling images and data within a single file. GeoTIFF is a specialized version of the TIFF format that includes geographic information within the tags of the format. The GeoTIFF model:
 * uses GeoKeys (stored in TIFF tags) to describe georeferencing
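As one concrete illustration, a common GeoTIFF arrangement ties a raster point to a model-space point with the ModelTiepoint tag and gives the ground size of a pixel with the ModelPixelScale tag. The helper below is a hypothetical sketch assuming a north-up, unrotated image; only the tag semantics come from the GeoTIFF specification.

```python
# Sketch: placing a pixel in model (geographic) space from the two
# GeoTIFF tags ModelTiepoint and ModelPixelScale, assuming a north-up,
# unrotated image. Function name and example values are illustrative.

def pixel_to_model(i, j, tiepoint, pixel_scale):
    """Map raster coordinates (i, j) to model coordinates (x, y)."""
    I, J, X, Y = tiepoint      # raster point (I, J) tied to model (X, Y)
    sx, sy = pixel_scale       # model units per pixel in x and y
    x = X + (i - I) * sx
    y = Y - (j - J) * sy       # raster rows grow downward, model Y grows up
    return x, y

# pixel (0, 0) tied to 10.0E, 50.0N, with 0.01-degree pixels
x, y = pixel_to_model(100, 200, (0, 0, 10.0, 50.0), (0.01, 0.01))
```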

The GRIB and GRIB2 models:
 * table driven
 * file format for meteorological data in binary
 * header descriptions for data packing, definition, and data representation type

The BUFR model:
 * table driven
 * file format for meteorological data in binary
 * sections include: indicator, identification, optional, description, data, and end

Coordinate systems
Georeferencing establishes the relationship between raster or vector images and coordinates, and determines the spatial location of other geographical features. When translating between different data formats, it is often necessary to establish a common coordinate system reference. In some cases, additional reference information, such as a world file, may be needed to perform the translation. For example, challenges occur when grid data is encoded in a "thinned" format, usually in the longitudinal dimension, where interoperability algorithms are needed. Translating between formats will always involve trade-offs. Various GIS tools, such as ArcMap, PCI Geomatica, or ERDAS Imagine, can help transform image data to a geographic control framework.
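For the "thinned" grid case mentioned above, one common approach is to interpolate each short row out to the full row length. The sketch below uses simple linear interpolation; real decoders may offer nearest-neighbour or other expansion options, so treat this as an illustrative assumption rather than the canonical algorithm.

```python
# Sketch: expanding one row of a "thinned" (reduced) grid, where rows
# near the poles carry fewer longitude points, to the full point count
# via linear interpolation. Lossy, but it regularizes the grid.

def expand_row(values, full_count):
    """Linearly interpolate a short grid row out to full_count points."""
    n = len(values)
    if n == full_count:
        return list(values)
    out = []
    for i in range(full_count):
        # fractional position of this output point along the short row
        pos = i * (n - 1) / (full_count - 1)
        lo = int(pos)
        frac = pos - lo
        hi = min(lo + 1, n - 1)
        out.append(values[lo] * (1 - frac) + values[hi] * frac)
    return out

row = expand_row([0.0, 10.0, 20.0], 5)   # 3 points -> 5 points
```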


 * NetCDF
   * No standard for storing georeferencing; some options to use for translating include:
     * metadata tag 'grid_mapping'
     * latitude/longitude grid arrays
     * spatial_ref and GeoTransform arrays
 * HDF
   * No standard for storing georeferencing; subdataset_type may contain swath data
 * HDF-EOS2 and HDF-EOS5
   * Geolocation and temporal information attached to spatial data
   * Not generally accessible to the GIS community (i.e. convert to GeoTIFF)
 * GeoTIFF
   * Georeferencing may be contained within the file
   * An ESRI world file with MapInfo may be used
 * GRIB
   * Grid coordinates defined in the description section
 * BUFR
   * Coordinates defined in the element descriptor section

File formats
Data models must be stored, or encoded, in a specific file format. Each format offers options for which data types, attributes, dimensions, or variables can be used. A brief overview of each file format's capabilities is shown below.

Conventions
Conventions provide a definitive description of what the data values found in each variable represent. For example, a convention may include descriptions of spatial and temporal properties, grid cell bounds, or averaging methods. This enables users of files from different sources to decide which variables are comparable. A convention should support various data types and formats.

When designing a convention, certain principles are considered. Some principles may include metadata requirements, interpretation of the data, ease of use, descriptions, and naming.
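Convention principles such as metadata requirements can be enforced mechanically. The sketch below assumes a simplified required-attribute list as a stand-in for real convention rules (the full CF conventions are far richer) and reports what a given variable is missing.

```python
# Sketch: a mechanical check of a convention's metadata requirements.
# The required-attribute list is a simplified assumption, not the
# actual rules of any published convention.

REQUIRED_ATTRS = ("units", "long_name")

def check_variable(name, attrs):
    """Return a list of convention problems found for one variable."""
    problems = []
    for req in REQUIRED_ATTRS:
        if req not in attrs:
            problems.append(f"{name}: missing attribute '{req}'")
    return problems

issues = check_variable("temperature", {"units": "K"})
```

Checks like this are what let users of files from different sources decide, without manual inspection, whether two variables are comparable.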

Conversion techniques
When converting between the various formats, the translating software must assemble the data and records into similar variables, dimensions, and coordinates. In some cases, a format may not contain all the information needed to translate to the other format. For example, when converting from GRIB to NetCDF, often not all of the needed dimensions are present. In order to assemble related records into NetCDF-like variables, sometimes a single dimension must be used; in this case, the variable is given the same name as the NetCDF dimension.

Dimensions may be established by first sorting the given grid data into a coherent order. If a dimension is still not present after sorting, it will be absent in the conversion. In contrast, attributes such as the start time may not change from record to record; in these cases, the same attribute value may be assigned to each of the resulting variables.
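The sorting step described above can be sketched as follows. The record fields (`time`, `level`, `param`) are simplified assumptions standing in for real GRIB metadata: independent records are sorted into a coherent order, and the distinct coordinate values they carry become the candidate dimensions.

```python
# Sketch: assembling independent GRIB-like records (one 2-D field each)
# into NetCDF-like dimensions by sorting on their metadata. The record
# keys used here are illustrative simplifications.

def derive_dimensions(records):
    """Sort records coherently and collect the coordinates they imply."""
    ordered = sorted(records, key=lambda r: (r["time"], r["level"]))
    times = sorted({r["time"] for r in records})
    levels = sorted({r["level"] for r in records})
    return ordered, {"time": times, "level": levels}

records = [
    {"time": 12, "level": 500, "param": "RH"},
    {"time": 0,  "level": 850, "param": "RH"},
    {"time": 0,  "level": 500, "param": "RH"},
    {"time": 12, "level": 850, "param": "RH"},
]
ordered, dims = derive_dimensions(records)
```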

It is good practice to convert data even when elements are missing, but to warn the user of the potential problems.

Conversion tables
Given the vast choices in representing data, the ability to quickly determine whether your data can be accessed, modified, or converted to a different format is useful. The tables below provide a subset of answers to those questions. To avoid ambiguity, the data model, file format (or file extension), convention, and versions where appropriate are defined in each cell on three lines.

For reading data, this conversion table provides information on the formats data can be translated to. Columns are shown as the destination, and rows as the source.

For writing data, this conversion table provides information on the formats data can be translated from. Columns are shown as the destination, and rows as the source.

Data type representations
For any given data stream there may be ambiguities regarding the appropriate structural data type to be used. As a general rule, the best way to resolve this ambiguity is to choose the most highly ordered data type that could describe the data.
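The "most highly ordered type" rule can be made concrete with a small heuristic. The function below is a hypothetical sketch, not an established algorithm: it calls data a regular grid when the coordinate arrays separate into shared 1-D axes, a swath when geolocation varies per scan line, and falls back to point data when no shared structure exists.

```python
# Sketch of a structural-type heuristic: prefer the most highly ordered
# type the coordinates support (grid > swath > point). Thresholds and
# logic are illustrative assumptions.

def classify(lat_rows, lon_rows):
    """Classify 2-D coordinate arrays as 'grid', 'swath', or 'point'."""
    if not lat_rows or len(lat_rows) != len(lon_rows):
        return "point"
    # grid: every row shares one longitude axis, and each row sits on
    # a single latitude, so the coordinates separate into 1-D axes
    same_lons = all(row == lon_rows[0] for row in lon_rows)
    same_lat_per_row = all(len(set(row)) == 1 for row in lat_rows)
    if same_lons and same_lat_per_row:
        return "grid"      # most ordered: separable 1-D axes
    return "swath"         # 2-D geolocation varies per sample

kind = classify([[10.0, 10.0], [20.0, 20.0]],
                [[5.0, 6.0], [5.0, 6.0]])
```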

The table below lists some of the structural data types, and their respective recommended data formats. The data formats are defined in three lines: the data model, file format, and convention.

Interoperability guidelines
Data interoperability is critical to integrating different models, tools, and perspectives in order to collaborate effectively. Data must be taken from multiple sources in order to study the Earth sciences as a system rather than as individual components. In many cases the chosen data types are the natural consequence of the manner in which the data is collected. However, without some sort of strict standard or policy, the ability to utilize observations and model data diminishes. The next best alternative is to incorporate best practices or established conventions (such as, in climatology, the Climate and Forecast Metadata Conventions). For example, the Hierarchical Data Format (HDF) is the standard data format for all NASA Earth Observing System (EOS) data products.

The following list is not meant to be exhaustive, but it includes best practices that improve interoperability.
 * 1) Use simpler data models.
 * 2) Use an established coordinate system or convention.