Ceph RBD

General

To connect to Ceph RBD you need to provide the keyring and configuration files. The Ceph RBD storage provider then detects the volumes and pools in the environment and allows you to assign backup policies. vProtect uses the RBD-NBD approach: it mounts a remote RBD snapshot over NBD and reads the data from the resulting block device.
Note:
  • vProtect needs access to the monitors specified in the Ceph configuration file.
  • When creating a Ceph RBD storage provider for an OpenStack environment, only the credentials specified in the storage provider form are used by the OpenStack backup process. The actual technique (RBD-NBD mount, or Cinder in the disk-attachment strategy), the node used for the connection, and the volumes to back up are determined by the OpenStack hypervisor manager settings, not by the storage provider settings.
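The RBD-NBD technique mentioned above can be sketched as the following manual sequence; the pool, volume, and snapshot names below are hypothetical examples, not names vProtect actually generates:

```shell
# Snapshot the volume so the read is crash-consistent
# (pool/volume/snapshot names below are examples)
rbd snap create volumes/volume-1234@vprotect-backup

# Map the snapshot read-only over NBD; the command prints
# the block device it attached to, e.g. /dev/nbd0
DEV=$(rbd-nbd map --read-only volumes/volume-1234@vprotect-backup)

# Read the device contents into a backup file
dd if="$DEV" of=/tmp/volume-1234.raw bs=4M

# Clean up: unmap the device and remove the temporary snapshot
rbd-nbd unmap "$DEV"
rbd snap rm volumes/volume-1234@vprotect-backup
```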

Example

Please complete the following steps to add the Ceph RBD storage provider:
  • vProtect Node supports Ceph RBD; you first need to install the Ceph client libraries:
    • On the vProtect Node, enable the required repositories:
For vProtect node installed on RHEL7:
    sudo subscription-manager repos --enable=rhel-7-server-rhceph-4-tools-rpms
For vProtect node installed on RHEL8:
    sudo subscription-manager repos --enable=rhceph-4-tools-for-rhel-8-x86_64-rpms
For vProtect node installed on CentOS7:
For vProtect node installed on CentOS8:
  • Install the rbd-nbd and ceph-common packages, with all dependencies:
    yum install rbd-nbd ceph-common
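After installation you can sanity-check that the client tools and the nbd kernel module (which rbd-nbd relies on) are available on the node:

```shell
# Report the installed client tool versions
rbd --version
rbd-nbd --version

# Load the nbd kernel module and confirm it is present
sudo modprobe nbd
lsmod | grep -w nbd
```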
  • Go to Storage -> Infrastructure and click Add Storage Provider
  • Choose Ceph RBD as the type and select the node responsible for backup operations
  • Provide the Ceph keyring file contents, i.e. the contents of your keyring file from the Cinder host (/etc/ceph/ceph.client.admin.keyring). Note: both the keyring and the configuration file contents must end with a newline character. For example:
    [client.admin]
    key = AQCCQG5dGKhUFBAA9G7TTQWfFXbF1ywbqpA1Vw==
    caps mds = "allow *"
    caps mgr = "allow *"
    caps mon = "allow *"
    caps osd = "allow *"
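If you want to verify locally that what you paste ends with a newline, one way (using a scratch copy of the keyring; the file path is an example) is:

```shell
# Write a scratch copy of the keyring; a here-document always
# terminates the file with a trailing newline
cat > /tmp/check.keyring <<'EOF'
[client.admin]
key = AQCCQG5dGKhUFBAA9G7TTQWfFXbF1ywbqpA1Vw==
caps mon = "allow *"
EOF

# Print the last byte of the file; it should be \n
tail -c 1 /tmp/check.keyring | od -An -c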
  • Provide the Ceph configuration file contents, for example:
    [global]
    cluster network = 10.40.0.0/16
    fsid = cc3a4e9f-d2ca-4fec-805d-2c40605723b3
    mon host = ceph-mon.domain.local
    mon initial members = ceph-00
    osd pool default crush rule = -1
    public network = 10.40.0.0/16
    [client.images]
    keyring = /etc/ceph/ceph.client.images.keyring
    [client.volumes]
    keyring = /etc/ceph/ceph.client.volumes.keyring
    [client.nova]
    keyring = /etc/ceph/ceph.client.nova.keyring
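Before saving the form, you can check from the node that the monitors listed in the configuration are reachable and that the keyring authenticates; the file paths and client id below are examples:

```shell
# Query the cluster status using the same config and keyring
# that will be pasted into the storage provider form
ceph -s --id admin \
     --conf /etc/ceph/ceph.conf \
     --keyring /etc/ceph/ceph.client.admin.keyring
```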
  • Click Save. You can now initiate inventory synchronization (pop-up message) to collect information about the available volumes and pools.
    • Later, you can use the Inventory Synchronization button to the right of the newly created provider in the list.
  • Your volumes will appear in the Instances section in the submenu on the left, from which you can initiate backup/restore/mount tasks or view a volume's backup history and details.