
PC Backup Software and Data Deduplication (Part I)

The one technology that has truly revolutionized how PC backup software works is data deduplication. Gartner calls it "inarguably one of the most important new technologies in storage of the past decade." So let's take a detailed look at what it actually means.

Data deduplication, or single instancing, essentially refers to the elimination of redundant data. In the deduplication process, duplicate data is deleted, leaving only one copy (a single instance) of the data to be stored. However, an index of all the data is still retained should that data ever be required. For example, a typical email system might contain 100 instances of the same 1 MB file attachment. If the email platform is backed up or archived, all 100 instances are saved, requiring 100 MB of storage space. With data deduplication, only one instance of the attachment is actually stored; each subsequent instance is simply referenced back to the one saved copy, reducing the storage and bandwidth demand to just 1 MB.

The practical benefits of this technology depend on several factors:

- Point of application: source vs. target
- Time of application: inline vs. post-process
- Granularity: file vs. sub-file level
- Algorithm: fixed-size blocks vs. variable-length data segments

This article is the first in a series that will attempt to explain how each of these factors defines the success of a PC backup.

Target vs. Source-based Deduplication

Target-based deduplication acts on the target data storage media. In this case the client is unmodified and unaware of any deduplication. The deduplication engine can be embedded in a hardware array, which can then be used as a NAS/SAN device with deduplication capabilities. Alternatively, it can be offered as an independent software or hardware appliance that acts as an intermediary between the backup server and the storage arrays. In both cases, it improves only storage utilization.
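The single-instancing idea described above can be sketched in a few lines of Python. This is a minimal illustration, not any product's implementation: the class name, the in-memory dictionary, and the use of SHA-256 as the content fingerprint are all assumptions made for the example.

```python
import hashlib

class DedupStore:
    """Minimal single-instance store: each unique payload is kept once,
    keyed by its content hash; duplicates become mere references."""

    def __init__(self):
        self.blocks = {}      # sha256 hex digest -> payload, stored once
        self.references = []  # one entry per logical object backed up

    def put(self, data: bytes) -> str:
        digest = hashlib.sha256(data).hexdigest()
        # Store the payload only if this content has never been seen before.
        self.blocks.setdefault(digest, data)
        self.references.append(digest)
        return digest

    def physical_bytes(self) -> int:
        """Space actually consumed on the target media."""
        return sum(len(d) for d in self.blocks.values())

    def logical_bytes(self) -> int:
        """Space the backups would consume without deduplication."""
        return sum(len(self.blocks[r]) for r in self.references)

# The email example from the text: 100 copies of the same 1 MB attachment.
store = DedupStore()
attachment = b"x" * (1024 * 1024)
for _ in range(100):
    store.put(attachment)

print(store.logical_bytes())   # 100 MB of logical data...
print(store.physical_bytes())  # ...held as just 1 MB physically
```

Because the dictionary key is the hash of the content itself, the hundredth copy of the attachment costs only a small reference entry, which is exactly the 100 MB-to-1 MB reduction described above.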
By contrast, source-based deduplication acts on the data at the source, before it is moved. A deduplication-aware backup agent is installed on the client, and it backs up only unique data. The result is improved bandwidth as well as storage utilization. However, this imposes an additional computational load on the backup client.
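The source-side trade-off can be sketched as a simple client/server exchange. This is a hedged illustration of the principle only; the class names, the hash-lookup protocol, and the byte counter are invented for the example and do not reflect any specific backup product.

```python
import hashlib

class BackupServer:
    """Server side: remembers which content hashes it already holds."""
    def __init__(self):
        self.stored = {}  # digest -> payload

    def has(self, digest: str) -> bool:
        return digest in self.stored

    def upload(self, digest: str, data: bytes) -> None:
        self.stored[digest] = data

class DedupAwareAgent:
    """Client-side agent: hashes each object locally (the extra CPU cost
    mentioned above) and transmits only payloads the server lacks."""
    def __init__(self, server: BackupServer):
        self.server = server
        self.bytes_sent = 0  # bandwidth actually consumed

    def backup(self, data: bytes) -> None:
        digest = hashlib.sha256(data).hexdigest()
        if not self.server.has(digest):
            # Duplicate payloads never cross the wire.
            self.server.upload(digest, data)
            self.bytes_sent += len(data)

# 100 clientside copies of a 1 MB file: only the first upload costs bandwidth.
server = BackupServer()
agent = DedupAwareAgent(server)
attachment = b"x" * (1024 * 1024)
for _ in range(100):
    agent.backup(attachment)
print(agent.bytes_sent)  # 1 MB transmitted for 100 identical files
```

The hash computation and the lookup round-trip run on the client, which is why source-based deduplication saves network bandwidth at the price of client CPU cycles.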