Known software issues and limitations



Task cancellation

Task cancellation is not reflected in an immediate task failure. The task state changes to cancelled, and the cancellation operation is initiated only when the engine next checks the task's state; some platforms may even require the data transfer to be completed first.

Workaround: Allow the task to cancel and fail gracefully - this lets Storware Backup & Recovery clean up temporary artifacts. If you click again, the task is forcibly removed from the queue, and artifacts such as snapshots are removed as part of the daily snapshot cleanup job. In general, avoid forced removal of tasks.


Storage usage statistics

Storage statistics are updated after each backup or after the Clean Old Backups job - this data may not be up to date at all times.

Workaround: To refresh the current storage usage, invoke the Clean Old Backups job from the Backup Destinations tab.


Pre/post access storage command execution

A complete command cannot be provided as a single string. Commands need to have their arguments provided as separate entries (by clicking the Add command arg button). Commands are executed directly using OS-level calls, so shell operators are not supported.

Workaround: To use shell-specific operators, built-ins, etc., execute the command with 3 command arguments: /bin/bash, -c, and your command with all of its arguments and shell operators, as in the sketch below.
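
For example, a pre-access command that makes sure a backup share is mounted could be entered as the following three entries (a minimal sketch - the mount point is hypothetical, and the whole shell line goes into a single entry):

    Command:      /bin/bash
    Command arg:  -c
    Command arg:  mountpoint -q /mnt/backups || mount /mnt/backups

This is equivalent to running /bin/bash -c 'mountpoint -q /mnt/backups || mount /mnt/backups' at the OS level, so the || operator is interpreted by the shell rather than passed to the executable.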


Tasks stuck in the queue in Queued state

Tasks are usually executed according to the limits set on the node, and only if the node is running and has available space in its staging space.

Workaround: Verify that the node has available space in the staging space path - there should be a warning message in vprotect_daemon.log.
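
A quick check from the node's shell (a sketch - /vprotect_data is the default staging path and /opt/vprotect/log the usual log location; adjust both to your installation):

    df -h /vprotect_data
    grep -i staging /opt/vprotect/log/vprotect_daemon.log | tail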


OpenStack backup using disk attachment

OpenStack with the disk-attachment backup strategy (Cinder) - version 3.9.2 only supports Ceph RBD as a storage backend.



KVM stand-alone - disk formats

VMs being backed up must have virtual disks as QCOW2/RAW files or LVM volumes
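
You can verify a disk's format before adding the VM to a backup policy; a sketch, with a hypothetical VM name and disk path:

    virsh domblklist myvm                                # list the VM's disks
    qemu-img info /var/lib/libvirt/images/myvm.qcow2     # reports "file format: qcow2" or "raw"
    lvs                                                  # confirm LVM volumes, if used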



KVM stand-alone - snapshots

Snapshots on KVM hypervisors are made using libvirt (QCOW2/RAW files) or LVM snapshots and are created on a per-volume basis; this operation may not be atomic if multiple drives are used.

Workaround: Keep file systems inside the VM on as few disks as possible to lower the risk of data inconsistency, or use pre/post remote command execution to quiesce the application before the snapshot is taken.
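
As a sketch, pre/post remote commands could freeze and thaw the guest file systems around the snapshot using the libvirt guest agent (this assumes qemu-guest-agent is installed in the VM; myvm is a placeholder):

    virsh domfsfreeze myvm    # pre-snapshot: flush and freeze guest file systems
    virsh domfsthaw myvm      # post-snapshot: thaw them again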


KVM stand-alone - incremental backup on QCOW2

Incremental backups are performed only on running VMs - libvirt does not allow blockcommit on a powered-down VM, so the snapshot could not be removed.

Workaround: A full backup will be performed instead.


Backup provider paths

The backup provider's paths must be mounted and available in advance - provide just the path to the mount point, without any protocol specification.

Workaround: Mount remote file systems first and make sure they are available at all times - in the Storware Backup & Recovery configuration, provide just the locally available mount point.
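
For example, a persistent NFS mount used as the backup destination path (the server and paths are hypothetical):

    echo "nfs-server:/export/backups  /mnt/backups  nfs  defaults,_netdev  0 0" >> /etc/fstab
    mount /mnt/backups

In the backup destination settings, enter /mnt/backups - not nfs://... or the server-side path.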


Backups marked as Success (removed)

When a backup completes or when the Clean Old Backups job runs, Storware Backup & Recovery marks non-present backups as removed (if any of the files that were part of the backup are missing). This may also happen if your storage was temporarily unavailable.

Workaround: Make sure the storage and all of the files are available, then run the Clean Old Backups job. This job also re-attempts to sync the files present in the backup provider with the database - if all files for a particular backup are found again (and all previous backups that this backup depends on are also present), it will regain Success status.


Hypervisor storage usage statistics

A restore may fail due to insufficient storage space in the hypervisor storage used as a target, because the usage information is not up to date. Usage statistics are updated only by the Inventory Synchronization job.

Workaround: Run the Inventory Synchronization job again on your hypervisor (or manager) to update the storage statistics, then try the restore again.


RHV - SSH transfer rate drops after some time

The SSH transfer rate may drop in some environments when the channel is used intensively over a longer period.

Workaround: If possible, and when the network used for transfers is trusted, use the netcat option to transfer files outside of the SSH channel.


Amazon EC2 - AMIs left in the account

For Amazon EC2, some instances require the original base image to be restored - this is especially true for Windows-based guests, where the license is tied to the original disk image. If the image is not kept, vProtect can only restore such guests by creating a new instance from an available image (as the root device) and attaching the data volumes. AMIs are kept for as long as a particular backup uses them, and are removed together with it.

Workaround: For such guests, we recommend enabling the Windows (or Linux) image required option in your Hypervisor Manager details.


AWS additional costs

Notice that vProtect sometimes needs to transfer EBS volumes between availability zones if a volume resides in a different AZ than the node - AWS charges for cross-AZ transfers.

Workaround: Deploy the node in the same AZ as the VMs it is going to protect, to limit the number of such transfers.


Node tasks limits

The number of concurrent tasks is configured in the Node Configuration -> Tasks section. These limits apply to all nodes that use this particular configuration. Currently, there is no global setting to limit the number of tasks across all of the nodes in the environment.

Workaround: To limit the number of tasks globally, reduce the numbers in each individual node configuration.


Hypervisor-specific settings in Node Configuration

All of the configuration parameters in the Hypervisors tab of a Node Configuration are applied to every node with this configuration, regardless of which hypervisor each node is attached to. This implies that Proxmox settings such as compression have to be the same on all hypervisors handled by nodes with the same configuration assigned.

Workaround: To use different values of these settings for some hypervisors, define separate node configurations and assign separate nodes to those hypervisors.


Inventory synchronization - duplicated UUID

In some cases, the same storage may have been detected previously with a different setup and remained in the database.

Workaround: Remove the unused hypervisor storage and invoke inventory synchronization again.


Estimated backup size of policy

The estimated backup size of a policy is computed based only on the known backup sizes and extrapolated to the rest of the VMs in the group. This means the estimation takes the average backup size and multiplies it by the number of all VMs in the group. Even though disk sizes are known, they are not always the same as the backup sizes (especially considering compression, or the fact that some strategies require chains of backup deltas to be exported).

Workaround: Wait for a longer period - once more backups have completed, the estimation will be closer to the real value.


Citrix Hypervisor/XCP-ng - transfers

The transfer NIC is not used in incremental backups when the CBT strategy is invoked - Citrix/XCP-ng may require NBD to be exposed by the pool master, so Storware Backup & Recovery has to read from the address provided by the CBT mechanism in order to connect to the NBD device. Also, in some cases, data can only be transferred from the master host (especially when the VM is powered down).

Workaround: Allow network traffic between all hypervisors in the pool and their corresponding nodes, as the actual transfer may sometimes occur from the master host instead of the one hosting the VM.
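
A quick reachability check from the node toward every host in the pool (the hostnames are placeholders, and 10809/tcp is the standard NBD port - verify the port actually used in your pool):

    for h in xcp-master xcp-host1 xcp-host2; do
      nc -z -w 3 "$h" 10809 && echo "$h: NBD reachable" || echo "$h: blocked"
    done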


RHV - SSH Transfer permissions on the hypervisor

SSH Transfer for RHV usually requires root permissions on the hypervisor in order to activate/deactivate LVM volumes for the backup

Workaround: You may try a different backup strategy, such as Disk Attachment or Disk Image Transfer.


RHV - SSH Transfer - hypervisor access

When using SSH Transfer for RHV environments, Storware Backup & Recovery needs to be able to access all hypervisors in the cluster - the created disk may be available only on a subset of them, and may need to be transferred or recovered using that specific hypervisor.

Workaround: Allow network traffic and provide valid credentials so that all hypervisors in the cluster can be accessed over SSH.
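
A simple connectivity check from the node (hostnames are placeholders for your actual cluster members):

    for h in rhv-host1 rhv-host2 rhv-host3; do
      ssh -o BatchMode=yes -o ConnectTimeout=5 root@"$h" hostname || echo "cannot reach $h over SSH"
    done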


Nutanix VG support

Storware Backup & Recovery 3.9.2 only supports volumes residing on storage containers - VGs (volume groups) are not supported yet.



Nutanix Prism Element/Central connectivity

Storware Backup & Recovery can perform backups only by using the APIs provided by Prism Element - Prism Central does not offer a backup API.

Workaround: Connect to your Prism Elements by specifying separate Hypervisor Managers in Storware Backup & Recovery (rather than pointing to Prism Central).


Nutanix backup consistency for intensively used VMs

An intensive workload on the VM may affect backup consistency when using crash-consistent backups.

Workaround: If you need higher consistency, install Nutanix Guest Tools inside your VM and enable application-consistent snapshots in the VM details in Storware Backup & Recovery.


iSCSI shares for RAW backups

RAW backups can be shared by Storware Backup & Recovery over iSCSI. Backups in other formats (such as QCOW2) currently cannot be shared over iSCSI.

Workaround: Use automatic mount instead, or restore the backup and mount it using external tools such as qemu-nbd for QCOW2 files, as sketched below.
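
A sketch of mounting a restored QCOW2 file manually with qemu-nbd (the file name and partition are hypothetical):

    modprobe nbd max_part=8
    qemu-nbd --connect=/dev/nbd0 /restore/myvm-disk0.qcow2
    mount /dev/nbd0p1 /mnt/inspect
    # browse the files, then clean up:
    umount /mnt/inspect
    qemu-nbd --disconnect /dev/nbd0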


Backup and snapshot policies assignment

Only 1 backup policy and 1 snapshot management policy can be assigned to a given VM.

Workaround: If you need dedicated settings for a single VM, create a separate policy for that VM.


1 schedule per rule

Currently, each schedule can only be assigned to a single rule within the same policy.

Workaround: If you need to execute two rules at the same time, create separate schedules and assign them to those rules.


Staging space

The staging space is an integral part of a node - it allows mixing backup strategies (especially those based on the export storage domain/repository approach) with other methods, and enables file system scanning for future file-level restores. It needs to be available at all times, and the vprotect user needs to be able to write to all of its subdirectories.

Workaround: To save space and boost backup time (direct writes to the backup destination), you can mount the staging space and your PowerProtect DD in the same directory - /vprotect_data. Remember to point the backup destination path to a subdirectory of this mount point, such as /vprotect_data/backups - still on the same file system, but the paths must be different.
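
An example layout under this scheme (the DD export name is hypothetical - the BoostFS/NFS setup varies per environment):

    mount | grep vprotect_data
    # dd-host:/export/vprotect on /vprotect_data type nfs ...
    # staging path:            /vprotect_data
    # backup destination path: /vprotect_data/backups   (same FS, different path)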


Node OS-level permissions

At the OS level, Storware Backup & Recovery requires significant permissions to be able to manipulate disks, scan for file systems, mount them, expose resources over iSCSI, operate on block devices (NBD/iSCSI/RBD), and more. Unfortunately, this requires multiple sudo entries, as well as SELinux being disabled.

Workaround: If some features are not required - including those related to NBD/iSCSI/NFS - you may reduce the number of entries in /etc/sudoers.d/01-vprotect_node.

You can also try enabling SELinux, but you then need to track SELinux errors and add the appropriate permissions whenever some functionality is blocked.
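
A sketch for tracking denials with the standard SELinux tools (reproduce the blocked operation first, and review any generated module before loading it):

    ausearch -m avc -ts recent                    # list recent SELinux denials
    ausearch -m avc -ts recent | audit2allow -M vprotect_local
    semodule -i vprotect_local.pp                 # load the generated local policy module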


Proxmox VE - CBT backup strategy

Virtual machines with QCOW2 disks are required in order to use the new Proxmox VE backup strategy - Change Block Tracking (CBT)


Proxmox VE - CBT backup strategy

At the moment, the "Dirty Bitmaps" function is not supported; therefore, the last snapshot must be left in place for incremental backups.


Storage Providers and node assignment

Storware Backup & Recovery supports only one node assigned to a Storage Provider, which means that backing up significantly large volumes from bigger storage providers (for example Ceph RBD) requires a high-performing node and cannot be scaled out by adding nodes.

Workaround: Install multiple Storware Backup & Recovery Server+Node environments and protect a non-overlapping set of volumes with each Storware Backup & Recovery instance.


RHV - Restore with SPARSE disk allocation format

Restore to RHV using the SPARSE disk allocation format is not supported if the backup files are in RAW format and the destination storage domain type is either Fibre Channel or iSCSI. If such a configuration is detected, the disk allocation format is automatically switched to PREALLOCATED.

Workaround: You can use other backup strategies that produce QCOW2 files instead of RAW (such as Disk Image Transfer). Alternatively, select a different storage domain of a type that supports SPARSE disks with RAW files.


Microsoft 365 - Restore of a site that has been deleted from the Bin

If the site has been deleted from the Bin, only the site logic can be restored. Links added in the deleted site are not restored.

Workaround: Restore the site or subsites to recover their logic. Next, download the data (site content) manually and upload it to SharePoint Online. Begin the download from the second level of SharePoint protected data (lists/pages/document libraries).


Microsoft 365 - Restore of a 1:1 Teams chat

Links shared in the chat do not work after restore.

Workaround: You can still download the shared files by copying the link address and pasting it into a different web browser tab.


Microsoft 365 - Restore site from the template

Sometimes after restoring a site from a template, the page template is not set, despite the lack of errors in the logs.

Workaround: This condition can be repaired by:
- restoring the site again
- setting the template manually


Microsoft 365 - Restore site

Sometimes after restoring a site, you may see a "You need permission to access this site" message.

Workaround: This is due to the lack of a site owner. This condition can be repaired by:
- restoring the site again
- setting the site owner in the admin panel


Microsoft 365 - Restore site

After restoring a site, the correct images may not be visible everywhere.

Workaround: The links to images change during restore and do not always reload correctly. This can be fixed by selecting the desired image from the library again.
