Known software issues and limitations
Issue ID | Product feature | Description | Workaround |
--- | --- | --- | --- |
0001 | Task cancellation | Task cancellation does not result in an immediate task failure - the task state is changed to cancelled, and only when the engine checks its state again does it initiate the cancellation operation - some platforms may even require the data transfer to complete first | Allow the task to cancel and fail gracefully - this allows vProtect to clean up temporary artifacts. If you click cancel again, the task is forcibly removed from the queue and artifacts such as snapshots are removed as part of the daily snapshot cleanup job. In general, avoid forced removal of tasks. |
0002 | Storage usage statistics | Storage statistics are updated after each backup or Clean Old Backups job, so this data may not always be up to date | To refresh the current storage usage, invoke the Clean Old Backups job from the Backup Destinations tab
0003 | Pre/post access storage command execution | A complete command cannot be provided as a single string. Commands need to have their arguments provided as separate entries (by clicking the Add command arg button). Commands are executed directly using OS-level calls, so shell operators are not supported directly. | To use shell-specific operators or commands, invoke the command via a shell using 3 command arguments - the shell binary, the -c flag and the complete command as a single string (see the sketch after this table)
0004 | Tasks stuck in the queue in Queued state | Tasks are usually executed according to the limits set on the node, and only if the node is running and has available space in the staging space | Verify that the node has available space in the staging space path - there should be a warning message
0005 | OpenStack backup using disk attachment | OpenStack with disk-attachment backup strategy (cinder) - 3.9.2 only supports Ceph RBD as a storage backend | N/A |
0006 | KVM stand-alone - disk formats | VMs being backed up must have virtual disks as QCOW2/RAW files or LVM volumes | N/A |
0007 | KVM stand-alone - snapshots | Snapshots on KVM hypervisors are made using libvirt (QCOW2/RAW files) or LVM snapshots and are created on a per-volume basis; this operation may not be atomic if multiple drives are used | Make sure data in the VM (especially file systems) resides on as few disks as possible to lower the risk of data inconsistency, or use pre/post remote command execution to quiesce applications before a snapshot is taken
0008 | KVM standalone incremental backup on QCOW2 | Incremental backups are performed only on running VMs - libvirt doesn't allow a block commit on a powered-down VM, so the snapshot could not be removed | A full backup will be performed instead
0009 | Backup providers path | Backup provider paths must be mounted and available in advance - provide only the path to the mount point, without any protocol specification | Mount remote file systems first and make sure they are available at all times - in the vProtect configuration provide just the locally available mount point
0010 | Backups marked as Success (removed) | When a backup completes or when the Clean Old Backups job is performed, vProtect marks non-present backups as removed (if any of the files that were a part of the backup is not present). This may also happen if your storage was temporarily unavailable. | Make sure the storage and all of the files are available and run the Clean Old Backups job - this job also attempts to sync files present in the backup provider with the database again. If all files for a particular backup are found again (and all previous backups that this particular backup depends on are also present), it will have the status of Success again
0011 | Hypervisor storage usage statistics | Restore may fail due to insufficient storage space in the Hypervisor Storage used as a target, because the usage information is not up to date. Usage statistics are updated only by the Inventory Synchronization job | Run the Inventory Synchronization job again on your hypervisor (or manager) to update storage statistics and try to restore again.
0012 | RHV - SSH transfer rate drops after some time | SSH transfer rate may drop in some environments when used intensively over a longer time. | If possible and when the network used for transfers is trusted, please use the netcat option to transfer files outside of the SSH channel |
0013 | AWS EC2 - AMIs left in the account | For AWS EC2, some instances require the original base image to be restored - this is especially true for Windows-based clients where the license is tied to the original disk image. If the image is not kept, vProtect can only restore such guests by creating a new instance from an available image (as the root device) and attaching data volumes. AMIs are kept as long as a particular backup uses them and are removed together with it. | For such guests, we recommend enabling the Windows (or Linux) image required option in your Hypervisor Manager details
0014 | AWS additional costs | Note that vProtect sometimes needs to transfer EBS volumes between AZs if a volume resides in a different AZ than the node - AWS charges for cross-AZ transfers | The recommended deployment is in the same AZ as the VMs that the node is going to protect, to limit the number of such transfers
0015 | Node tasks limits | The number of concurrent tasks is configured in the Node Configuration -> Tasks section. These limits apply to all nodes that use this particular configuration. Currently, there is no global setting to limit the number of tasks for all of the nodes in the environment. | To limit the number of tasks globally, reduce the numbers in the individual node configurations.
0016 | Hypervisor-specific settings in Node Configuration | All of the configuration parameters in the Hypervisors tab in Node Configuration are applied to all nodes with this configuration - regardless of which hypervisor it is attached to. This implies that Proxmox settings such as compression will have to be the same on all hypervisors handled by nodes with the same configuration assigned. | To use different values of these settings for some hypervisors, define separate node configurations and assign separate nodes to those hypervisors.
0017 | Inventory synchronization - duplicated UUID | In some cases, it may happen that the same storage was previously detected with a different setup and remained in the database. | Remove unused hypervisor storage and try to invoke inventory synchronization again. |
0018 | Estimated backup size of policy | The estimated backup size of a policy is computed based only on known backup sizes and extrapolated to the rest of the VMs in the group. This implies that the estimation uses the average backup size and multiplies it by the number of all VMs in the group. Even though disk sizes are known, they are not always the same as the size of the backups (especially considering compression or the fact that some strategies require chains of backup deltas to be exported) | Wait - as more backups are completed, this estimation will get closer to the real value.
0019 | Citrix Hypervisor/ xcp-ng - transfers | Transfer NIC is not used in incremental backups when CBT strategy is invoked - Citrix/xcp-ng may require NBD to be exposed by the master - so vProtect has to read from the address provided by the CBT mechanism in order to connect to the NBD device. Also in some cases, data can only be transferred from the master host (especially when it is powered down) | Allow network traffic between all hypervisors and corresponding nodes in the same pool, as sometimes actual transfer may occur from the master host instead of the one which hosts VM. |
0020 | RHV - SSH Transfer permissions on the hypervisor | SSH Transfer for RHV usually requires root permissions on the hypervisor in order to activate/deactivate LVM volumes for the backup | The only alternative is to try a different backup strategy such as Disk Attachment or Disk Image Transfer
0021 | RHV - SSH Transfer - hypervisor access | vProtect using SSH Transfer for RHV environments needs to be able to access all hypervisors in the cluster, as it may happen that the created disk is available only on a subset of them and needs to be transferred or recovered using that specific hypervisor | Allow network traffic and provide valid credentials to access all hypervisors in the cluster over SSH.
0022 | Nutanix VG support | vProtect 3.9.2 only supports volumes residing on the storage containers - VGs are not supported yet | N/A |
0023 | Nutanix Prism Element/Central connectivity | vProtect is able to perform backups only by using the APIs provided by Prism Element - Prism Central doesn't offer a backup API | Connect to your Prism Elements by specifying separate Hypervisor Managers in vProtect (not by pointing to Prism Central)
0024 | Nutanix backup consistency for intensively used VMs | Intensive workload on the VM may affect backup consistency when using crash-consistent backup | If you need higher consistency install Nutanix Guest Tools inside your VM and enable application-consistent snapshots in the VM details in vProtect |
0025 | iSCSI shares for RAW backups | vProtect can share RAW backups over iSCSI. Backups in other formats (such as QCOW2) cannot currently be shared over iSCSI | Use automatic mount instead, or restore the backup to a file system and mount it using external tools such as qemu-nbd for QCOW2 files (see the sketch after this table)
0026 | Backup and snapshot policies assignment | Only 1 backup and 1 snapshot management policy can be assigned to a given VM | If you need dedicated settings for a single VM, you need to create a separate policy for that VM
0027 | 1 schedule per rule | Currently, each schedule can only be assigned to a single rule within the same policy. | If you need to execute two rules at the same time, you need to create separate schedules and assign them to these rules
0028 | Staging space | Staging space is an integral part of a node - it allows vProtect to mix backup strategies, especially those based on the export storage domain/repository approach, with other methods and with file system scanning for future file-level restores. It needs to be available at all times. | To save space and boost backup time (direct writes to the backup destination), you can however mount the staging space and your backup destination in the same directory.
0029 | Node OS-level permissions | On the OS level, vProtect requires significant permissions to be able to manipulate disks, scan for file systems, mount them, expose resources over iSCSI, operate on block devices (NBD/iSCSI/RBD) and more. These unfortunately require multiple sudo entries, and SELinux is disabled at the moment. | If some features are not required at the moment - including those related to NBD/iSCSI/NFS - you may reduce the number of sudo entries. You can also try to enable SELinux, but you will then need to track SELinux errors and add appropriate permissions when some of the functionality is blocked
0030 | Proxmox VE - CBT backup strategy | QCOW2 virtual machines are required to use the new Proxmox VE backup strategy - Changed Block Tracking (CBT) | N/A
0031 | Proxmox VE - CBT backup strategy | At the moment the "Dirty Bitmaps" function is not supported, therefore the last snapshot is required to be left in place for incremental backups | N/A
0032 | Storage Providers and node assignment | vProtect supports only one node assigned to a Storage Provider, which means that backing up significantly big volumes from larger storage providers (e.g. Ceph RBD) will require a high-performing node and cannot be scaled out by adding nodes | Install multiple vProtect Server+Node environments and protect a non-overlapping set of volumes with each vProtect instance.
0033 | RHV - Restore with SPARSE disk allocation format | Restore to RHV using the SPARSE disk allocation format is not supported if backup files are in RAW format and the destination storage domain type is either Fibre Channel or iSCSI. If such a configuration is detected, the disk allocation format is automatically switched to PREALLOCATED | You can use other backup strategies that use QCOW2 files instead of RAW (like Disk Image Transfer). Alternatively, you can select a different storage domain of a type that supports SPARSE disks with RAW files
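
The distinction in issue 0003 between OS-level execution and shell execution can be illustrated with a short sketch. This is not vProtect code - it is a minimal Python example, assuming a Unix-like system with /bin/bash available, showing why operators such as && are not interpreted when arguments are passed directly, and how the 3-argument shell invocation works.

```python
# Minimal illustration (not vProtect code) of direct OS-level execution vs.
# invoking a shell with 3 arguments. Commands used here are example values.
import subprocess

# Direct execution: every argument is a separate entry, so "&&" is passed to
# echo as a literal string instead of being treated as a shell operator.
subprocess.run(["echo", "first", "&&", "echo", "second"])

# Shell invocation with 3 arguments: the shell binary, the -c flag, and the
# complete command as one string - here the shell interprets "&&" itself.
subprocess.run(["/bin/bash", "-c", "echo first && echo second"])
```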
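
For issue 0025, the manual qemu-nbd workflow can look roughly as follows. This is an illustrative sketch only, assuming a Linux host with the qemu-nbd tool and the nbd kernel module available; the file, device and mount point names are placeholders, and the partition layout of your backup may differ.

```python
# Rough sketch (not vProtect code) of mounting a restored QCOW2 backup file
# with qemu-nbd. All paths and device names below are example values.
import os
import subprocess

QCOW2_FILE = "/tmp/restored/vm-disk-0.qcow2"  # hypothetical restored backup file
NBD_DEVICE = "/dev/nbd0"
MOUNT_POINT = "/mnt/backup-browse"

def run(args):
    """Run a command and fail loudly if it returns a non-zero exit code."""
    subprocess.run(args, check=True)

os.makedirs(MOUNT_POINT, exist_ok=True)

run(["modprobe", "nbd", "max_part=8"])                    # load the NBD kernel module
run(["qemu-nbd", f"--connect={NBD_DEVICE}", QCOW2_FILE])  # expose QCOW2 as a block device
run(["mount", f"{NBD_DEVICE}p1", MOUNT_POINT])            # mount the first partition

# ... browse or copy files from MOUNT_POINT ...

run(["umount", MOUNT_POINT])                              # clean up
run(["qemu-nbd", f"--disconnect={NBD_DEVICE}"])
```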