Data Reduction

Data reduction in data mining refers to the transformation of large volumes of data into smaller, representative subsets while preserving the integrity of the original data set as much as possible. It may be required for many reasons, such as minimizing processing time and resources, economical storage, and reduced testing and debugging time for data-processing algorithms. On the negative side, errors introduced at this early stage may be severely magnified in subsequent data mining tasks, distorting the results. Information loss caused by the reduction process itself is also a matter of critical concern.

An example from astronomy is the data reduction aboard the Kepler satellite. The satellite records 95-megapixel images once every six seconds, generating tens of megabytes of data per second, orders of magnitude more than its downlink bandwidth of 550 KBps. The on-board data reduction co-adds the raw frames over thirty minutes, reducing the data rate by a factor of 300. Furthermore, interesting targets are pre-selected and only the relevant pixels are processed, about 6% of the total. The reduced data are then sent to Earth, where they are processed further.
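The reduction factors above can be checked with simple arithmetic; this sketch takes only the frame cadence, the co-add window, and the pixel fraction from the text, and everything else follows from them.

```python
# Back-of-envelope check of the on-board reduction figures described above.

frame_interval_s = 6        # one 95-megapixel frame every six seconds
coadd_window_s = 30 * 60    # raw frames are co-added for thirty minutes

# Co-adding N raw frames into one summed frame cuts the frame rate by N.
frames_per_coadd = coadd_window_s // frame_interval_s
print(frames_per_coadd)     # 300, matching the stated factor of 300

# Pre-selecting targets keeps only ~6% of the pixels.
pixel_fraction = 0.06
total_reduction = frames_per_coadd / pixel_fraction
print(round(total_reduction))   # ~5000x overall reduction in data volume
```

Combining both stages shows why the downlink budget works out: a factor of 300 from co-adding times roughly 17 from pixel selection yields an overall reduction of about three and a half orders of magnitude.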

Data reduction is the transformation of numerical or alphabetical digital information, derived empirically or experimentally, into a corrected, ordered, and simplified form. The basic concept is the reduction of multitudinous amounts of data down to their meaningful parts.

When information is derived from instrument readings, there may also be a transformation from analog to digital form. When the data are already in digital form, the 'reduction' typically involves editing, scaling, coding, sorting, collating, and producing tabular summaries. When the observations are discrete but the underlying phenomenon is continuous, smoothing and interpolation are often needed. Data reduction is often undertaken in the presence of reading or measurement errors, and some idea of the nature of these errors is needed before the most likely value can be determined.
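Two of the steps above can be sketched briefly: smoothing discrete observations of a continuous phenomenon, and taking an average of repeated readings as the most likely value in the presence of measurement error. The readings below are hypothetical.

```python
def moving_average(readings, window=3):
    """Smooth a sequence by averaging each value with its neighbours."""
    half = window // 2
    smoothed = []
    for i in range(len(readings)):
        lo = max(0, i - half)
        hi = min(len(readings), i + half + 1)
        smoothed.append(sum(readings[lo:hi]) / (hi - lo))
    return smoothed

# Noisy readings of a quantity that rises steadily over time.
readings = [1.1, 1.9, 3.2, 3.8, 5.0]
print(moving_average(readings))

# With repeated readings of a single quantity, the mean is a common
# estimate of the most likely value.
repeats = [9.8, 10.1, 10.0, 9.9, 10.2]
print(sum(repeats) / len(repeats))  # 10.0
```

A moving average is only one of many smoothing choices; which estimate is "most likely" depends on the error model, which is why the text stresses knowing the nature of the errors first.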

Best practices
These are common techniques used in data reduction:
 * Order by some aspect of size.
 * Table diagonalization, whereby rows and columns of tables are re-arranged to make patterns easier to see (refer to the diagram).
 * Round drastically to one, or at most two, effective digits (effective digits are ones that vary in that part of the data).
 * Use averages to provide a visual focus as well as a summary.
 * Use layout and labeling to guide the eye.
 * Remove chartjunk, such as pictures and lines.
 * Give a brief verbal summary.
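Two of the practices above, rounding drastically to the effective digits and using an average as a summary, can be illustrated with a hypothetical row of table values.

```python
# Hypothetical table row: the leading "23" never varies, so the
# effective digits are the tens and units places.
values = [2314.6, 2297.1, 2305.8, 2322.4]

# Round drastically: keep only the digits that actually vary.
rounded = [round(v, -1) for v in values]
print(rounded)   # [2310.0, 2300.0, 2310.0, 2320.0]

# An average provides a single summary figure for the row.
print(round(sum(values) / len(values), 1))   # 2310.0
```

After rounding, the row reads at a glance as "all about 2300, spread 2300 to 2320", which is the pattern the drastic-rounding practice is meant to expose.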