FLEXiBLE deduplication is a compression process that optimizes storage usage and replication performance by comparing files and storing only one copy when a duplicate is found. Deduplication is the heart of the appliance and streamlines its replication processes. The core data deduplication process consists of storing each unique data sequence only once. By discarding duplicate data elements, the deduplication engine sends only the changed data blocks to the storage array. This significantly reduces the amount of transmitted data and increases the speed of data backups. Data deduplication is also used for replication: only unique blocks are sent from the source to the target, and when a block is not unique, only a pointer to the matching unique block is sent, reducing bandwidth utilization.
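The store-each-unique-sequence-once idea can be sketched in a few lines. This is a minimal illustration, not the appliance's actual engine: it assumes fixed-size blocks and SHA-256 fingerprints, whereas real deduplication engines typically use variable (content-defined) chunking.

```python
import hashlib

BLOCK_SIZE = 4096  # hypothetical fixed block size for illustration


def deduplicate(data: bytes):
    """Split data into blocks, store each unique block once, and keep
    an ordered list of hashes (pointers) for reconstruction."""
    store = {}     # hash -> unique block contents (stored once)
    pointers = []  # ordered hashes referencing blocks in the store
    for i in range(0, len(data), BLOCK_SIZE):
        block = data[i:i + BLOCK_SIZE]
        digest = hashlib.sha256(block).hexdigest()
        if digest not in store:
            store[digest] = block  # first occurrence: store the block
        pointers.append(digest)    # duplicates cost only a pointer
    return store, pointers


def reconstruct(store, pointers):
    """Rebuild the original data by following the pointers."""
    return b"".join(store[h] for h in pointers)
```

For input containing repeated blocks, the store holds fewer blocks than the original stream, while the pointer list still reproduces the data exactly.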
Deduplication at the source removes redundant blocks on the client before any data is sent to the backup target. Source-based deduplication reduces bandwidth and storage use and requires no additional deduplication hardware. Deduplication at the target is network-based: backups are sent over the network to the remote target, which deduplicates them on arrival. Even though it requires more network bandwidth, target-side deduplication can be more efficient than source deduplication, particularly for large data sets.