XCP-ng
In this strategy, the VM is exported as a single XVA bundle containing all of the data. Incremental backups are also supported. Data is transferred directly via the XenServer API, so nothing needs to be set up on the hosts. Backup and restore work as follows (a CLI sketch follows the list):
crash-consistent snapshot using the hypervisor's API (full backups only)
optionally, a quiesced snapshot can be taken if this is enabled and guest tools are installed inside the VM - if the quiesced snapshot fails, a regular snapshot is taken instead
optional application consistency using pre/post snapshot command execution
data is exported directly from the hypervisor using its API - both full (XVA) and delta (VHD for each disk)
the full backup (XVA) contains the metadata
the snapshot taken with a full backup is kept on the hypervisor for the next incremental backup - if at least one schedule assigned to the VM has its backup type set to incremental
incremental backups are cumulative (all data since the last full backup)
restore recreates the VM from the XVA and then applies changes from each incremental backup using the hypervisor API
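For reference, the same flow can be sketched manually with the xe CLI (a simplified illustration only - vProtect drives these operations through the XenAPI; the UUIDs and file path are placeholders):

```
# Take a crash-consistent snapshot of the VM
xe vm-snapshot uuid=<vm-uuid> new-name-label=backup-snapshot

# Export the snapshot as a single XVA bundle
xe snapshot-export-to-template snapshot-uuid=<snapshot-uuid> filename=/backup/vm-full.xva
```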
In this strategy, the VM is exported using the XenServer API (full backups) and the Network Block Device (NBD) service on the XenServer hosts (incremental backups). The CBT feature in Citrix XenServer 7.3+ may require an additional license. The resulting backup consists of separate files for each disk plus metadata, so you also have the option to exclude specific drives. Backup and restore work as follows (a CLI sketch follows the list):
Note: If you run full backups only, you can still use this strategy without CBT enabled on the hypervisor.
crash-consistent snapshot using the hypervisor's API
optionally, a quiesced snapshot can be taken if this is enabled and guest tools are installed inside the VM - if the quiesced snapshot fails, a regular snapshot is taken instead
optional application consistency using pre/post snapshot command execution
CBT is enabled on each disk during the full backup if it was not enabled earlier
metadata is exported from the API
full backup - each disk is exported from the API (RAW format)
incremental backup - each disk is queried for changed blocks, which are then exported over NBD
the last snapshot is kept on the hypervisor for the next incremental backup - if at least one schedule assigned to the VM has its backup type set to incremental
restore recreates the VM from metadata using the API and imports the merged chain of data for each disk using the API
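The hypervisor primitives behind this strategy can be illustrated with the xe CLI (a sketch only - vProtect uses the corresponding XenAPI and NBD calls; the UUIDs are placeholders):

```
# Enable CBT on a disk (done during the first full backup if not enabled earlier)
xe vdi-enable-cbt uuid=<vdi-uuid>

# Between two snapshots of the same disk, retrieve the bitmap of changed blocks;
# the changed data itself is then read over NBD
xe vdi-list-changed-blocks vdi-from-uuid=<previous-snapshot-vdi-uuid> vdi-to-uuid=<current-snapshot-vdi-uuid>
```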
Citrix introduced the CBT mechanism in XenServer 7.3. In order to enable CBT backups, the following requirements must be met:
Citrix Hypervisor 7.3 (XCP-ng 7.4) or above must be used - note that CBT is a licensed feature
The NBD server must be enabled on the hypervisor
The NBD client and NBD module must be installed on the vProtect Node (vProtect should take care of this automatically during installation)
When image-based backups (XVA) are used - vProtect restores VMs as templates and renames them appropriately after the restore
When separate disk backups are used:
if there is already a VM in the infrastructure with the UUID of the VM being restored (check the present flag in the VM list) - vProtect restores it as a new VM (MAC addresses will be generated)
otherwise vProtect attempts to restore the original configuration including MAC addresses
Get the Network UUID that you intend to use for communication with vProtect - run on the XenServer shell:
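For example, using the standard xe CLI:

```
xe network-list
```

Pick the uuid field of the network you want to use.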
For example: e16b4e34-47d4-9a6e-371b-65beb7252d69
Enable the NBD service on your hypervisor:
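The documented way to do this is to add the nbd purpose to the chosen network (replace the UUID with the one obtained above):

```
xe network-param-add param-name=purpose param-key=nbd uuid=<network-uuid>
```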
Note: This part is done by vProtect automatically during installation. The steps below may be helpful in case of problems with the NBD module.
vProtect comes with a pre-built RPM and modules for the CentOS 7 distribution.
Go to the NBD directory:
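For example (the exact location depends on your installation; /opt/vprotect/scripts/nbd is an assumed path here):

```
cd /opt/vprotect/scripts/nbd
```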
Use yum to install the NBD client:
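A sketch assuming the RPM shipped in this directory (the exact file name of the package may differ):

```
yum -y localinstall nbd-*.rpm
```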
If your Linux does not have the NBD module installed, you may try to build one yourself (there is a script for Red Hat-based distributions which downloads the kernel, enables the NBD module and builds it) or use the already provided module:
you can compile the module by running:
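The script name below is an assumption for illustration - check the NBD directory for the actual build script:

```
./compile_nbd_module.sh   # assumed script name; downloads the kernel sources and builds nbd.ko
```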
if you have CentOS 7, you may also use the pre-built module (for CentOS 7.4.1708 with kernel 3.10.0-693.5.2) - nbd.ko
Enable the module by invoking the script (the following command will either use a module already present in your kernel or copy the provided nbd.ko):
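The script name below is an assumption based on the description above - check the NBD directory for the actual script:

```
./enable_nbd.sh   # assumed script name; loads the in-kernel module or installs the provided nbd.ko
```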
Verify that you have /dev/nbd* devices available on your vProtect Node host:
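For example:

```
ls /dev/nbd*
```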
Restart your vProtect Node:
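Assuming the standard systemd unit name for the Node service (vprotect-node - adjust if your installation uses a different name):

```
systemctl restart vprotect-node
```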