Cloud Data Backup
Also known as online backup, cloud backup is a method of backing up data in which a copy of the data is sent over a proprietary or public network to an off-site storage system. That storage is typically hosted by a third-party service provider, which charges the backup customer a fee based on capacity, bandwidth or number of users. In the enterprise, the off-site storage might be proprietary, but the chargeback method would be similar.
Removable Media
Removable media is any type of storage device that can be easily removed from a data center, including magnetic backup tape, removable hard drives and optical media. Removable media allows users to take a copy of data off-site for disaster recovery purposes.
Target Deduplication
Target deduplication removes redundant data from a backup transmission as it passes through an appliance that sits between the source and the backup target. You can use any backup software the device supports. While target deduplication reduces the amount of storage required, it does not reduce the amount of data that must be sent across a LAN or WAN during the backup.
Source Deduplication
Source deduplication removes redundancies from data before it is sent to the backup target. Source deduplication products reduce bandwidth and storage usage, and many support automation for off-site copies. That said, source dedupe is slower than target dedupe, especially for larger amounts of data, and the added workload on servers can increase overall backup times.
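The difference between the two approaches comes down to where the redundancy check happens. As an illustration only, here is a minimal Python sketch of the source-side step, assuming fixed-size chunking and a hash index shared with the target (real products typically use variable-size chunking and far more robust indexing):

```python
import hashlib

CHUNK_SIZE = 4096  # assumed fixed-size chunks; commercial products often use variable-size chunking

def dedupe_chunks(data: bytes, target_index: set) -> list:
    """Split data into chunks and return only those whose hashes are not
    already known to the backup target (the source-side dedupe step)."""
    new_chunks = []
    for i in range(0, len(data), CHUNK_SIZE):
        chunk = data[i:i + CHUNK_SIZE]
        digest = hashlib.sha256(chunk).hexdigest()
        if digest not in target_index:
            target_index.add(digest)
            new_chunks.append(chunk)
    return new_chunks

# Two backups of identical data: the second run transfers nothing.
index = set()
payload = b"A" * 4096 + b"B" * 4096 + b"C" * 4096
first_run = dedupe_chunks(payload, index)   # 3 unique chunks would be sent
second_run = dedupe_chunks(payload, index)  # 0 chunks: all already at the target
```

Because the hashing happens before transmission, only `first_run`'s chunks would cross the network, which is why source dedupe saves bandwidth at the cost of extra CPU work on the server being backed up.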
Centralized Backup
Centralized backup automatically replicates data from remote sites, sending it over a network to a main (centralized) location for storage. It can be used to automate backups at remote sites and lower backup administration costs, and it serves as an alternative to local backup, which requires the maintenance of tape libraries at remote sites. While centralized backup solves the potential security issues of loose tape media, it can make backups and restores take longer and tie up a network's bandwidth.
Encryption Key Management
As you might suspect from looking at the term, encryption key management is the administration of tasks involved with protecting, sharing, backing up and organizing encryption keys. Thanks to high-profile data losses and regulatory compliance requirements, encryption in the enterprise is a fast-rising trend. Unfortunately, a single enterprise might use dozens of different, perhaps incompatible encryption tools. This can lead to thousands of encryption keys, each of which must be securely stored, adequately protected and easily retrievable.
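To make the storage-and-retrieval problem concrete, here is a deliberately minimal in-memory keystore sketch in Python (the class and method names are hypothetical; a real key management system would add access control, auditing, secure persistence and hardware-backed protection such as an HSM):

```python
import secrets

class KeyStore:
    """Hypothetical in-memory keystore; real systems add access control,
    auditing, rotation policy and hardware-backed key protection."""

    def __init__(self):
        self._keys = {}

    def create_key(self, key_id: str, nbytes: int = 32) -> bytes:
        """Generate and store a new random key under a unique identifier."""
        if key_id in self._keys:
            raise ValueError(f"key {key_id!r} already exists")
        self._keys[key_id] = secrets.token_bytes(nbytes)
        return self._keys[key_id]

    def get_key(self, key_id: str) -> bytes:
        """Retrieve a key by identifier -- the 'easily retrievable' requirement."""
        return self._keys[key_id]

    def rotate_key(self, key_id: str) -> bytes:
        """Replace a key in place; data encrypted under the old key must be
        re-encrypted, or the old key archived, before it is discarded."""
        self._keys[key_id] = secrets.token_bytes(len(self._keys[key_id]))
        return self._keys[key_id]

# Demo: one identifier per key keeps sprawl manageable across many tools.
store = KeyStore()
original = store.create_key("tape-backup-aes256")
rotated = store.rotate_key("tape-backup-aes256")
```

Even this toy version shows why key management matters: once a key is rotated, anything encrypted under the old key is unreadable unless that key was archived or the data re-encrypted.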
VMware Backup
VMware backup involves copying the data on a virtual machine in a VMware environment to prevent data loss. VMware backup specifically, and virtual server backup in general, is a common challenge for storage and backup administrators. VMware backup can be completed with conventional backup software; while that is the most straightforward approach, it can also lead to resource contention, because the additional resources needed to complete a backup can degrade the performance of the virtual machines on the physical server being backed up. Backup software vendors were slow to market with VMware backup solutions, but the VMware vStorage APIs for Data Protection sparked development in that area, and a number of VMware-specific backup tools are available today.
Incremental Backup
Incremental backups copy only the files that have changed since the last backup. For example, if a full backup takes place on a Wednesday, Thursday's incremental will back up all files changed since Wednesday's full backup, while Friday's incremental will back up only the files that have changed since Thursday's incremental. The biggest plus of incremental backups? Fewer files are backed up daily, allowing for shorter backup windows. The biggest negative? During a complete restore, the latest full backup and all subsequent incrementals must be restored, which can take a significant amount of time.
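The mechanism can be sketched in a few lines of Python by comparing file modification times against the timestamp of the previous backup (a simplification: commercial products track changes more reliably, for example via archive bits or change journals):

```python
import os
import shutil
import tempfile
import time

def incremental_backup(source_dir: str, dest_dir: str, last_backup_time: float) -> list:
    """Copy only files modified since the previous backup (full or incremental)."""
    copied = []
    for root, _dirs, files in os.walk(source_dir):
        for name in files:
            src = os.path.join(root, name)
            if os.path.getmtime(src) > last_backup_time:
                rel = os.path.relpath(src, source_dir)
                dst = os.path.join(dest_dir, rel)
                os.makedirs(os.path.dirname(dst), exist_ok=True)
                shutil.copy2(src, dst)  # copy2 preserves file metadata
                copied.append(rel)
    return copied

# Demo: the first pass copies everything; a later pass copies nothing new.
src = tempfile.mkdtemp()
dst = tempfile.mkdtemp()
with open(os.path.join(src, "report.txt"), "w") as f:
    f.write("quarterly figures")
first_pass = incremental_backup(src, dst, 0)                  # report.txt copied
second_pass = incremental_backup(src, dst, time.time() + 1)   # nothing changed since
```

The restore-time drawback follows directly from this design: rebuilding the full picture means replaying the last full backup plus every incremental pass after it, in order.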
Remote Replication
Again, as you might suspect, remote replication copies production data to a device at a remote location for data protection or disaster recovery purposes. It can be either synchronous or asynchronous. Synchronous replication writes data to the primary and secondary sites simultaneously; asynchronous replication introduces a delay before the data is written to the secondary site. Because asynchronous replication is designed to work over longer distances and requires less bandwidth, it is often the better option for disaster recovery. However, asynchronous replication risks data loss during a system outage because data at the target device isn't fully synchronized with the source data.
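The trade-off can be modeled with a toy Python class (illustrative only): synchronous writes land at both sites before they complete, while asynchronous writes queue up and are applied to the secondary later, which is exactly the window in which an outage can lose data:

```python
from collections import deque

class Replicator:
    """Toy model of synchronous vs. asynchronous remote replication."""

    def __init__(self, synchronous: bool):
        self.synchronous = synchronous
        self.primary = []
        self.secondary = []
        self.pending = deque()  # async writes not yet applied at the remote site

    def write(self, block: str):
        self.primary.append(block)
        if self.synchronous:
            # Synchronous: the write lands at both sites before it completes.
            self.secondary.append(block)
        else:
            # Asynchronous: acknowledged immediately, replicated later.
            self.pending.append(block)

    def drain(self):
        """Apply queued writes at the secondary site (the async catch-up step)."""
        while self.pending:
            self.secondary.append(self.pending.popleft())

# Demo: with async replication, an outage before drain() would lose block-1.
r = Replicator(synchronous=False)
r.write("block-1")
loss_window = len(r.primary) - len(r.secondary)  # one unreplicated write
r.drain()
```

Real asynchronous replication bounds this window with a recovery point objective (RPO); the sketch simply makes the lag visible.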
Data Archiving
Data archiving moves data that is no longer actively used to a separate storage device for long-term retention. Data archives contain older data that may prove necessary for future reference, along with data organizations must keep for regulatory compliance. Archives are indexed and searchable so that files, and parts of files, can be easily located and retrieved.
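As a rough illustration of the indexing-and-search idea, the Python sketch below walks an archive directory and builds a small metadata index (the function names are hypothetical; real archive products index file contents as well as metadata):

```python
import os
import tempfile

def build_archive_index(archive_dir: str) -> list:
    """Walk the archive and record searchable metadata for each file."""
    index = []
    for root, _dirs, files in os.walk(archive_dir):
        for name in files:
            path = os.path.join(root, name)
            index.append({
                "name": name,
                "path": os.path.relpath(path, archive_dir),
                "size": os.path.getsize(path),
                "modified": os.path.getmtime(path),
            })
    return index

def search_index(index: list, keyword: str) -> list:
    """Locate archived files by case-insensitive file-name match."""
    return [entry for entry in index if keyword.lower() in entry["name"].lower()]

# Demo: index a one-file archive and retrieve the file by keyword.
archive = tempfile.mkdtemp()
with open(os.path.join(archive, "invoices-2019.csv"), "w") as f:
    f.write("id,amount\n")
idx = build_archive_index(archive)
hits = search_index(idx, "invoices")
```

The index is what distinguishes an archive from a plain backup: retrieval is driven by search over metadata (and, in real products, content), not by restoring whole volumes.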