Configure Eucalyptus Storage

These are the types of storage available for your Eucalyptus cloud.

Object storage

Eucalyptus provides an AWS S3 compatible object storage service that gives users web-based, general-purpose storage designed to be scalable, reliable, and inexpensive. You choose the object storage backend provider: Walrus, MinIO, or Ceph RGW. The Object Storage Gateway (OSG) provides access to objects via the backend provider you choose.

Block storage

Eucalyptus provides an AWS EBS compatible block storage service that provides block storage for EC2 instances. Volumes can be created as needed and dynamically attached to and detached from instances as required. EBS provides persistent data storage for instances: a volume, and the data on it, can exist beyond the lifetime of an instance. You choose the block storage backend provider for a deployment.

1 - Configure Block Storage

This topic describes how to configure block storage on the Storage Controller (SC) for the backend of your choice.

The Storage Controller (SC) provides functionality similar to the Amazon Elastic Block Store (Amazon EBS). The SC can interface with various storage systems. Eucalyptus block storage (EBS) exports storage volumes that can be attached to a VM and mounted or accessed as a raw block device. EBS volumes can persist past VM termination and are commonly used to store persistent data.

Eucalyptus provides the following open source (free) backend providers for the SC:

  • Overlay, using the local file system
  • DAS-JBOD (just a bunch of disks)
  • Ceph

You must configure the SC to use one of the backend provider options.

1.1 - Use Ceph-RBD

This topic describes how to configure Ceph-RBD as the block storage backend provider for the Storage Controller (SC).

Prerequisites

  • Successful completion of all the install sections prior to this section.

  • The SC must be installed, registered, and running.

  • You must execute the steps below as a Eucalyptus administrator.

  • You must have a functioning Ceph cluster.

  • Ceph user credentials with sufficient privileges are available to the SCs and NCs (different user credentials can be used for the SCs and NCs); a sketch for creating such a user follows this list.

  • Hypervisor support for Ceph-RBD on NCs. Node Controllers (NCs) are designed to communicate with the Ceph cluster via libvirt. This interaction requires a hypervisor that supports Ceph-RBD. See Configure Hypervisor Support for Ceph-RBD to satisfy this prerequisite.
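If you still need to create the Ceph client credentials mentioned above, here is a minimal sketch using standard Ceph tooling; the client name, pool, and capabilities are assumptions, so adjust them to your cluster's policies:

ceph auth get-or-create client.eucalyptus mon 'allow r' osd 'allow rwx pool=rbd' -o /etc/ceph/ceph.client.eucalyptus.keyring

To configure Ceph-RBD block storage for the zone, run the following commands on the CLC.

Configure the SC to use Ceph-RBD for EBS: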

    euctl ZONE.storage.blockstoragemanager=ceph-rbd

The output of the command should be similar to:

one.storage.blockstoragemanager=ceph-rbd

Verify that the property value is now ceph-rbd:

euctl ZONE.storage.blockstoragemanager

Check the SC to be sure that it has transitioned out of the BROKEN state and is in the NOTREADY, DISABLED, or ENABLED state before configuring the rest of the properties for the SC.
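One way to check is to reuse the service listing command shown elsewhere in this guide, filtered to SC services:

euserv-describe-services --filter service-type=storage

The ceph-rbd provider will assume defaults for the following properties for the SC: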

euctl ZONE.storage.ceph
 
PROPERTY        one.storage.cephconfigfile  /etc/ceph/ceph.conf
DESCRIPTION     one.storage.cephconfigfile  Absolute path to Ceph configuration (ceph.conf) file. Default value is '/etc/ceph/ceph.conf'
 
PROPERTY        one.storage.cephkeyringfile /etc/ceph/ceph.client.eucalyptus.keyring
DESCRIPTION     one.storage.cephkeyringfile Absolute path to Ceph keyring (ceph.client.eucalyptus.keyring) file. Default value is '/etc/ceph/ceph.client.eucalyptus.keyring'
 
PROPERTY        one.storage.cephsnapshotpools       rbd
DESCRIPTION     one.storage.cephsnapshotpools       Ceph storage pool(s) made available to Eucalyptus for EBS snapshots. Use a comma separated list for configuring multiple pools. Default value is 'rbd'
 
PROPERTY        one.storage.cephuser        eucalyptus
DESCRIPTION     one.storage.cephuser        Ceph username employed by Eucalyptus operations. Default value is 'eucalyptus'
 
PROPERTY        one.storage.cephvolumepools rbd
DESCRIPTION     one.storage.cephvolumepools Ceph storage pool(s) made available to Eucalyptus for EBS volumes. Use a comma separated list for configuring multiple pools. Default value is 'rbd'

The following steps are optional; perform them only if the default values do not work for your cloud.

To set the Ceph username (the default value is 'eucalyptus'):

euctl ZONE.storage.cephuser=myuser

To set the absolute path to the keyring file containing the key for the 'eucalyptus' user (the default value is '/etc/ceph/ceph.client.eucalyptus.keyring'):

euctl ZONE.storage.cephkeyringfile='/etc/ceph/ceph.client.myuser.keyring'

To set the absolute path to the ceph.conf file (the default value is '/etc/ceph/ceph.conf'):

euctl ZONE.storage.cephconfigfile=/path/to/ceph.conf

To change the comma-delimited list of Ceph pools assigned to Eucalyptus for managing EBS volumes (the default value is 'rbd'):

euctl ZONE.storage.cephvolumepools=rbd,myvolumes

To change the comma-delimited list of Ceph pools assigned to Eucalyptus for managing EBS snapshots (the default value is 'rbd'):

euctl ZONE.storage.cephsnapshotpools=mysnapshots
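If the non-default pools referenced above do not exist yet, create them on the Ceph cluster first. A minimal sketch using standard Ceph tooling; the pool names and placement-group count are assumptions, so size them for your cluster:

ceph osd pool create myvolumes 128
ceph osd pool create mysnapshots 128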

If you want to enable snapshot deltas for your Ceph backend:

Ensure that snapshot transfers are enabled:

euctl ZONE.storage.shouldtransfersnapshots=true

Set the maximum number of deltas to be created before creating a new full snapshot:

euctl ZONE.storage.maxsnapshotdeltas=NON_ZERO_INTEGER
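For example, to allow up to five deltas before the next full snapshot (the value 5 and the zone name are only illustrations):

euctl one.storage.maxsnapshotdeltas=5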

Every NC will assume the following defaults:

CEPH_USER_NAME="eucalyptus"
CEPH_KEYRING_PATH="/etc/ceph/ceph.client.eucalyptus.keyring"
CEPH_CONFIG_PATH="/etc/ceph/ceph.conf"

To override the above defaults, add or edit the following properties in the /etc/eucalyptus/eucalyptus.conf file on the specific NC:

CEPH_USER_NAME="ceph-username-for-use-by-this-NC"
CEPH_KEYRING_PATH="path-to-keyring-file-for-ceph-username"
CEPH_CONFIG_PATH="path-to-ceph.conf-file"

Repeat this step for every NC in the specific Eucalyptus zone.
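If you manage many NCs, the edit can be scripted. A rough sketch assuming passwordless SSH and placeholder hostnames; it appends the three properties, so remove any existing CEPH_* lines from each file first:

for nc in nc01 nc02 nc03; do
  ssh root@$nc 'cat >> /etc/eucalyptus/eucalyptus.conf <<EOF
CEPH_USER_NAME="eucalyptus"
CEPH_KEYRING_PATH="/etc/ceph/ceph.client.eucalyptus.keyring"
CEPH_CONFIG_PATH="/etc/ceph/ceph.conf"
EOF'
done

Your Ceph backend is now ready to use with Eucalyptus.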

1.1.1 - Configure Hypervisor Support for Ceph-RBD

This topic describes how to configure the hypervisor for Ceph-RBD support.

The following instructions walk you through verifying, and if necessary installing, the required hypervisor packages for Ceph-RBD support. Repeat this process for every NC in the Eucalyptus zone.

Verify whether qemu-kvm and qemu-img are already installed:

rpm -q qemu-kvm qemu-img

If they are not installed, proceed to the 'Prepare the RHEV qemu packages' step below.

Verify qemu support for the ceph-rbd driver:

qemu-img --help
qemu-img version 0.12.1, Copyright (c) 2004-2008 Fabrice Bellard
...
Supported formats: raw cow qcow vdi vmdk cloop dmg bochs vpc vvfat qcow2 qed vhdx parallels nbd blkdebug host_cdrom 
host_floppy host_device file gluster gluster gluster gluster rbd

If the eucalyptus-node service is running, terminate/stop all instances. After all instances are terminated, stop the eucalyptus-node service.

systemctl stop eucalyptus-node.service

Prepare the RHEV qemu packages:

  • If this NC is a RHEL system and a RHEV subscription to the qemu packages is available, consult the RHEV package procedure to install the qemu-kvm-ev and qemu-img-ev packages. Blacklist the RHEV packages in the Eucalyptus repository to ensure that the packages from the RHEV repository are installed.

  • If this NC is a RHEL system and a RHEV subscription to the qemu packages is unavailable, Eucalyptus-built and maintained qemu-rhev packages may be used. These packages are available in the same yum repository as other Eucalyptus packages. Note that using Eucalyptus-built RHEV packages voids the original RHEL support for the qemu packages.

  • If this NC is a non-RHEL (CentOS) system, Eucalyptus-built and maintained qemu-rhev packages may be used. If you are not using the RHEV package procedure, install the Eucalyptus-built RHEV packages qemu-kvm-ev and qemu-img-ev, which can be found in the same yum repository as other Eucalyptus packages:

    yum install qemu-kvm-ev qemu-img-ev
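After the installation completes, you can confirm the packages are present with the same query style used earlier:

rpm -q qemu-kvm-ev qemu-img-ev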

Start the libvirtd service.

systemctl start libvirtd.service

Verify qemu support for the ceph-rbd driver:

qemu-img --help
qemu-img version 0.12.1, Copyright (c) 2004-2008 Fabrice Bellard
...
Supported formats: raw cow qcow vdi vmdk cloop dmg bochs vpc vvfat qcow2 qed vhdx parallels nbd blkdebug host_cdrom 
host_floppy host_device file gluster gluster gluster gluster rbd

Make sure the eucalyptus-node service is started.

systemctl start eucalyptus-node.service

Your hypervisor is ready for Eucalyptus Ceph-RBD support. You are now ready to configure Ceph-RBD for Eucalyptus.

1.2 - About the BROKEN State

This topic describes the initial state of the Storage Controller (SC) after you have registered it with the Cloud Controller (CLC).

The SC automatically goes to the BROKEN state after being registered with the CLC; it remains in that state until you explicitly configure the SC by telling it which backend storage provider to use.

You can check the state of a storage controller by running euserv-describe-services --expert and noting the state and status message of the SC(s). The output for an unconfigured SC looks something like this:

SERVICE	storage        	ZONE1        	SC71           	BROKEN    	37  	http://192.168.51.71:8773/services/Storage	arn:euca:eucalyptus:ZONE1:storage:SC71/
SERVICEEVENT	6c1f7a0a-21c9-496c-bb79-23ddd5749222	arn:euca:eucalyptus:ZONE1:storage:SC71/
SERVICEEVENT	6c1f7a0a-21c9-496c-bb79-23ddd5749222	ERROR
SERVICEEVENT	6c1f7a0a-21c9-496c-bb79-23ddd5749222	Sun Nov 18 22:11:13 PST 2012
SERVICEEVENT	6c1f7a0a-21c9-496c-bb79-23ddd5749222	SC blockstorageamanger not configured. Found empty or unset manager(unset). Legal values are: das,overlay,ceph

Note the error above: SC blockstoragemanager not configured. Found empty or unset manager(unset). Legal values are: das,overlay,ceph.

This indicates that the SC is not yet configured. It can be configured by setting the ZONE.storage.blockstoragemanager property to ‘das’, ‘overlay’, or ‘ceph’.

You can verify the configured SC block storage manager using:

euctl ZONE.storage.blockstoragemanager

which shows the current value.

1.3 - Use Direct Attached Storage (JBOD)

This topic describes how to configure DAS-JBOD as the block storage backend provider for the Storage Controller (SC).

Prerequisites

  • Successful completion of all the install sections prior to this section.

  • The SC must be installed, registered, and running.

  • Direct Attached Storage requires that the SC have enough space for locally cached snapshots.

  • You must execute the steps below as a Eucalyptus administrator.

To configure DAS-JBOD block storage for the zone, run the following commands on the CLC.

Configure the SC to use Direct Attached Storage for EBS:

    euctl ZONE.storage.blockstoragemanager=das

The output of the command should be similar to:

one.storage.blockstoragemanager=das

Verify that the property value is now ‘das’:

euctl ZONE.storage.blockstoragemanager

Set the DAS device name property. The device name can be either a raw device (/dev/sdX, for example), or the name of an existing Linux LVM volume group.

euctl ZONE.storage.dasdevice=DEVICE_NAME

For example:

euctl one.storage.dasdevice=/dev/sdb
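If you prefer to hand Eucalyptus an LVM volume group instead of a raw device, a sketch using standard LVM tooling; the device and volume group names are assumptions:

pvcreate /dev/sdb
vgcreate eucadas /dev/sdb
euctl one.storage.dasdevice=eucadas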

Your DAS-JBOD backend is now ready to use with Eucalyptus.

1.4 - Use the Overlay Local Filesystem

This topic describes how to configure the local filesystem as the block storage backend provider for the Storage Controller (SC).

Prerequisites

  • Successful completion of all the install sections prior to this section.
  • The SC must be installed, registered, and running.
  • The local filesystem must have enough space to hold volumes and snapshots created in the cloud.
  • You must execute the steps below as a Eucalyptus administrator.

In this configuration the SC itself hosts the volumes and snapshots for EBS and stores them as files on the local filesystem. It uses standard Linux iSCSI tools to serve the volumes to instances running on NCs.
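Before configuring, it can be worth confirming that the SC host has enough free space. A quick check; /var/lib/eucalyptus/volumes is the usual Eucalyptus storage path, but verify it for your deployment:

df -h /var/lib/eucalyptus/volumes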

To configure overlay block storage for the zone, run the following commands on the CLC.

Configure the SC to use the local filesystem for EBS:

euctl ZONE.storage.blockstoragemanager=overlay 

The output of the command should be similar to:

one.storage.blockstoragemanager=overlay

Verify that the property value is now ‘overlay’:

euctl ZONE.storage.blockstoragemanager

Your local filesystem (overlay) backend is now ready to use with Eucalyptus.

2 - Configure Object Storage

This topic describes how to configure object storage on the Object Storage Gateway (OSG) for the backend of your choice. The OSG passes requests to object storage providers and talks to the persistence layer (DB) to authenticate requests. You can use Walrus, MinIO, or Ceph-RGW as the object storage provider.

  • Walrus - the default backend provider. It is a single-host Eucalyptus-integrated provider offering basic object storage functionality at small scale. Walrus is intended for light S3 usage.

  • MinIO - a high-performance, scalable object storage provider. MinIO implements the S3 API, which is used by the OSG rather than directly by end users. Distributed MinIO protects against multiple node/drive failures and bit rot using erasure code.

  • Ceph-RGW - an object storage interface built on top of Librados to provide applications with a RESTful gateway to Ceph Storage Clusters. Ceph-RGW uses the Ceph Object Gateway daemon (radosgw), which is a FastCGI module for interacting with a Ceph Storage Cluster. Since it provides interfaces compatible with OpenStack Swift and Amazon S3, the Ceph Object Gateway has its own user management. Ceph Object Gateway can store data in the same Ceph Storage Cluster used to store data from Ceph Filesystem clients or Ceph Block Device clients. The S3 and Swift APIs share a common namespace, so you may write data with one API and retrieve it with the other.

You must configure the OSG to use one of the backend provider options.

Example showing unconfigured objectstorage:

# euserv-describe-services --show-headers --filter service-type=objectstorage
SERVICE  TYPE           ZONE        NAME                      STATE
SERVICE  objectstorage  user-api-1  user-api-1.objectstorage  broken

2.1 - Use Ceph-RGW

This topic describes how to configure Ceph Rados Gateway (RGW) as the backend for the Object Storage Gateway (OSG).

Prerequisites

  • Successful completion of all the install sections prior to this section.
  • The UFS must be registered and enabled.
  • A Ceph storage cluster is available.
  • The ceph-radosgw service has been installed (on the UFS or any other host) and configured to use the Ceph storage cluster. Eucalyptus recommends using civetweb with the ceph-radosgw service. Civetweb is a lightweight web server included in the ceph-radosgw installation, and it is easier to install and configure than the alternative, a combination of Apache and FastCGI modules.

For more information on Ceph-RGW, see the Ceph-RGW documentation.

Configure Ceph-RGW object storage

You must execute the steps below as a Eucalyptus administrator.

Configure ceph-rgw as the storage provider using the euctl command:

euctl objectstorage.providerclient=ceph-rgw

Configure objectstorage.s3provider.s3endpoint to the ip:port of the host running the ceph-radosgw service:

euctl objectstorage.s3provider.s3endpoint=<radosgw-host-ip>:<radosgw-webserver-port>
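For example, with civetweb listening on its default radosgw port (the IP address below is a placeholder):

euctl objectstorage.s3provider.s3endpoint=192.168.1.100:7480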

Configure objectstorage.s3provider.s3accesskey and objectstorage.s3provider.s3secretkey with the radosgw user credentials:

euctl objectstorage.s3provider.s3accesskey=<radosgw-user-accesskey>
euctl objectstorage.s3provider.s3secretkey=<radosgw-user-secretkey>
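If you have not yet created a radosgw user for Eucalyptus, a minimal sketch using standard Ceph tooling; the uid and display name are assumptions. The command prints the access and secret keys to plug into the properties above:

radosgw-admin user create --uid=eucalyptus --display-name="Eucalyptus OSG"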

The Ceph-RGW backend and OSG are now ready for production.

2.2 - Use MinIO Backend

This topic describes how to configure MinIO as the object storage backend provider for the Object Storage Gateway (OSG).

Prerequisites

  • The UFS must be registered and enabled.
  • MinIO must be installed and started.

For more information on MinIO installation and configuration, see the MinIO Server Documentation.
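As a quick illustration, a standalone MinIO server can be started with a single data directory; the credentials and path below are placeholders, and distributed deployments differ:

export MINIO_ACCESS_KEY=<minio-accesskey>
export MINIO_SECRET_KEY=<minio-secretkey>
minio server /data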

To configure MinIO object storage

You must execute the steps below as a Eucalyptus administrator.

Configure minio as the storage provider using the euctl command:

euctl objectstorage.providerclient=minio

Configure objectstorage.s3provider.s3endpoint to the ip:port of a host running the minio server:

euctl objectstorage.s3provider.s3endpoint=<minio-host-ip>:<minio-port>

Configure objectstorage.s3provider.s3accesskey and objectstorage.s3provider.s3secretkey with credentials for minio:

euctl objectstorage.s3provider.s3accesskey=<minio-accesskey>
euctl objectstorage.s3provider.s3secretkey=<minio-secretkey>

Configure the expected response code for minio:

euctl objectstorage.s3provider.s3endpointheadresponse=400

The MinIO backend and OSG are now ready for production.

2.3 - Use Walrus Backend

This topic describes how to configure Walrus as the object storage backend provider for the Object Storage Gateway (OSG).

Prerequisites

  • Successful completion of all the install sections prior to this section.
  • The UFS must be registered and enabled.

To configure Walrus object storage

You must execute the steps below as a Eucalyptus administrator.

Configure walrus as the storage provider using the euctl command:

euctl objectstorage.providerclient=walrus

Check that the OSG is enabled.

euserv-describe-services
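To narrow the output to just the OSG, the same filter shown earlier in this section works here too:

euserv-describe-services --show-headers --filter service-type=objectstorage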

If the state appears as disabled or broken, check the cloud-*.log files in the /var/log/eucalyptus directory. A disabled state generally indicates that there is a problem with your network or credentials. See Log File Location and Content for more information.

The Walrus backend and OSG are now ready for production.