Configure the Runtime Environment

Now that Eucalyptus is installed and registered, perform the tasks in this section to configure the runtime environment and begin using your cloud.

1 - Configure Eucalyptus DNS

Eucalyptus provides a DNS service that maps service names, bucket names, and more to IP addresses. This section details how to configure the Eucalyptus DNS service.

The DNS service will automatically try to bind to port 53. If port 53 cannot be used, DNS will be disabled. Typically, other system services like dnsmasq are configured to run on port 53. To use the Eucalyptus DNS service, you must disable these services.
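For example, to see what is currently listening on port 53 and, if dnsmasq owns it, stop and disable that service (a typical remedy; the exact service name on your system may differ):

ss -lnup | grep ':53 '
systemctl stop dnsmasq.service
systemctl disable dnsmasq.service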

Configure the Domain and Subdomain

Before using the DNS service, configure the DNS subdomain name that you want Eucalyptus to handle using the steps that follow.

Log in to the CLC and enter the following:

euctl system.dns.dnsdomain=mycloud.example.com

You can configure the load balancer DNS subdomain. To do so, log in to the CLC and enter the following:

euctl services.loadbalancing.dns_subdomain=lb

Turn on IP Mapping

To enable mapping of instance IPs to DNS host names:

Enter the following command on the CLC:

euctl bootstrap.webservices.use_instance_dns=true

When this option is enabled, public and private DNS entries are created for each launched instance in Eucalyptus. This also enables virtual hosting for Walrus. Buckets created in Walrus can be accessed as hosts. For example, the bucket mybucket is accessible as mybucket.objectstorage.mycloud.example.com.

Instance IP addresses will be mapped as euca-A-B-C-D.eucalyptus.mycloud.example.com, where A-B-C-D is the IP address (or addresses) assigned to your instance.

If you want to modify the subdomain that is reported as part of the instance DNS name, enter the following command:

euctl cloud.vmstate.instance_subdomain=.custom-dns-subdomain

When this value is modified, the public and private DNS names reported for each instance will contain the specified custom DNS subdomain name instead of the default value, which is eucalyptus. For example, if this value is set to foobar, the instance DNS names will appear as euca-A-B-C-D.foobar.mycloud.example.com.
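As a quick check, the name for a hypothetical instance with the address 10.0.12.34 (and instance_subdomain set to .foobar as above) should resolve through the Eucalyptus DNS service:

nslookup euca-10-0-12-34.foobar.mycloud.example.com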

Enable DNS Delegation

DNS delegation allows you to forward DNS traffic for the Eucalyptus subdomain to the Eucalyptus CLC host. This host acts as a name server. This allows interruption-free access to Eucalyptus cloud services in the event of a failure. The CLC host is capable of mapping cloud host names to IP addresses of the CLC and UFS / OSG host machines.

For example, if the IP address of the CLC is 192.0.2.5, and the IP address of Walrus is 192.0.2.6, the host compute.mycloud.example.com resolves to 192.0.2.5 and objectstorage.mycloud.example.com resolves to 192.0.2.6.

To enable DNS delegation:

Enter the following command on the CLC:

euctl bootstrap.webservices.use_dns_delegation=true

Configure the Master DNS Server

Set up your master DNS server to delegate the Eucalyptus subdomain to the UFS host machines, which act as name servers.

The following example shows how the Linux name server bind is set up to delegate the Eucalyptus subdomain.

Open /etc/named.conf and set up the example.com zone. For example, your /etc/named.conf may look like the following:

zone "example.com" IN {
	      type master;
	      file "/etc/bind/db.example.com";
	      };

Create /etc/bind/db.example.com if it does not exist. If your master DNS is already set up for example.com, you will need to add a name server entry for the UFS host machines. For example:

$ORIGIN example.com.
$TTL 604800

@ IN    SOA ns1 admin.example.com 1 604800 86400 2419200 604800
        NS  ns1
ns1     A   MASTER.DNS.SERVER_IP
ufs1    A   UFS1_IP
mycloud NS  ufs1

After this, you will be able to resolve your instances’ public DNS names, such as euca-A-B-C-D.eucalyptus.mycloud.example.com.

Restart the bind name server (service named restart). Verify your setup by pointing /etc/resolv.conf on your client to your primary DNS server and attempting to resolve compute.mycloud.example.com using ping or nslookup. It should return the IP address of a UFS host machine.
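For example, on a client whose /etc/resolv.conf points at the master DNS server above, the first command should show the delegation and the second should return the IP address of a UFS host machine:

dig +short mycloud.example.com NS
nslookup compute.mycloud.example.com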

Advanced DNS Options

Recursive lookups and split-horizon DNS are available in Eucalyptus.

To enable any of the DNS resolvers, set dns.enabled to true . To enable the recursive DNS resolver, set dns.recursive.enabled to true . To enable split-horizon DNS resolution for internal instance public DNS name queries, set dns.split_horizon.enabled to true .
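For example, to turn on all three resolvers using the properties named above:

euctl dns.enabled=true
euctl dns.recursive.enabled=true
euctl dns.split_horizon.enabled=true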

Optional: Configure Eucalyptus DNS to Spoof AWS Endpoints

You can configure instances to use AWS region FQDNs for service endpoints by enabling DNS spoofing.

Set up a Eucalyptus cloud with Eucalyptus DNS and HTTPS endpoints. When creating the CSR, make sure to add Subject Alternative Names for all the supported AWS services for the given region being tested. For example:

$ openssl req -in wildcard.c-06.autoqa.qa1.eucalyptus-systems.com.csr \
    -noout -text | less

X509v3 Subject Alternative Name:
     DNS:ec2.us-east-1.amazonaws.com, DNS:autoscaling.us-east-1.amazonaws.com,
     DNS:cloudformation.us-east-1.amazonaws.com, DNS:monitoring.us-east-1.amazonaws.com,
     DNS:elasticloadbalancing.us-east-1.amazonaws.com, DNS:s3.amazonaws.com,
     DNS:sts.us-east-1.amazonaws.com

Set DNS spoofing:

[root@d-17 ~]#  euctl dns.spoof_regions --region euca-admin@future
dns.spoof_regions.enabled = true
dns.spoof_regions.region_name =
dns.spoof_regions.spoof_aws_default_regions = true
dns.spoof_regions.spoof_aws_regions = true

Launch an instance, and allow SSH access. SSH into the instance and install AWS CLI.

ubuntu@euca-172-31-12-59:~$ sudo apt-get install -y python-pip
ubuntu@euca-172-31-12-59:~$ sudo -H pip install --upgrade pip
ubuntu@euca-172-31-12-59:~$ sudo -H pip install --upgrade awscli

Run aws configure and set access and secret key information if not using instance profile. Confirm AWS CLI works with HTTPS Eucalyptus service endpoint:

ubuntu@euca-172-31-12-59:~$ aws --ca-bundle euca-ca-0.crt \
    --endpoint-url https://ec2.c-06.autoqa.qa1.eucalyptus-systems.com/ ec2 describe-key-pairs
{
    "KeyPairs": [
        {
            "KeyName": "devops-admin",
            "KeyFingerprint": "ee:4f:93:a8:87:8d:80:8d:2c:d6:d5:60:20:a3:2d:b2"
        }
    ]
}

Test against AWS FQDN service endpoint that matches one of the SANs in the signed certificate:

ubuntu@euca-172-31-12-59:~$ aws --ca-bundle euca-ca-0.crt \
    --endpoint-url https://ec2.us-east-1.amazonaws.com ec2 describe-key-pairs
{
    "KeyPairs": [
        {
            "KeyName": "devops-admin",
            "KeyFingerprint": "ee:4f:93:a8:87:8d:80:8d:2c:d6:d5:60:20:a3:2d:b2"
        }
    ]
}

2 - Create the Eucalyptus Cloud Administrator User

After your cloud is running and DNS is functional, create a user and access key for day-to-day cloud administration.

Prerequisites

  • Cloud services must be installed and registered.
  • DNS must be configured.

Create a cloud admin user

Eucalyptus admin tools and Euca2ools commands need configuration from ~/.euca. If the directory does not yet exist, create it:

mkdir ~/.euca

Choose a name for the new user and create it along with an access key:

euare-usercreate -wld DOMAIN USER >~/.euca/FILE.ini

where:

  • DOMAIN must match the DNS domain for the cloud.
  • USER is the name of the new admin user.
  • FILE can be anything; we recommend a descriptive name that includes the user’s name.
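For example (alice is a hypothetical user name, on the mycloud.example.com cloud used throughout this guide):

euare-usercreate -wld mycloud.example.com alice >~/.euca/alice.ini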

This creates a file with a region name that matches that of your cloud’s DNS domain; you can edit the file to change the region name if needed.

Switch to the new admin user:

# eval `clcadmin-release-credentials`
# export AWS_DEFAULT_REGION=REGION

where:

  • REGION must match the region name from the previous step. By default, this is the same as the cloud’s DNS domain.

As long as this file exists in ~/.euca, you can use it by repeating the export command above. These euca2ools.ini configuration files are a flexible means of managing cloud regions and users.

Alternatively you can configure the default region in the global section of your Euca2ools configuration:

# cat ~/.euca/global.ini
[global]
default-region = REGION

Setting REGION to the one from the earlier step means you do not have to use export to select the region.

User impersonation

The eucalyptus account can act as other accounts for administrative purposes. To act as the admin user in the account-1 account, run:

# eval `clcadmin-impersonate-user -a account-1 -u admin`

Impersonating an account allows you to view and modify resources for that account. For example, you can clean up resources in an account before deleting it.
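A minimal sketch of such a cleanup, assuming a leftover instance in account-1 (the instance ID is hypothetical):

# eval `clcadmin-impersonate-user -a account-1 -u admin`
# euca-describe-instances
# euca-terminate-instances i-abcd1234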

To stop impersonating run:

clcadmin-release-credentials

Next steps

The remainder of this guide assumes you have completed the above steps.

Use these credentials after this point.

3 - Upload the Network Configuration

This topic describes how to upload the network configuration created earlier in the installation process. To upload your networking configuration:

Run the following command to upload the configuration file to the CLC (with valid Eucalyptus admin credentials):

euctl cloud.network.network_configuration=@/path/to/your/network_config_file

To review the existing network configuration run:

euctl --dump --format=raw cloud.network.network_configuration

When you use the Ansible playbook for deployment, a network configuration file is available at /etc/eucalyptus/network.yaml on the CLC.
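In that case, for example, the upload command becomes:

euctl cloud.network.network_configuration=@/etc/eucalyptus/network.yaml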

4 - Configure Eucalyptus Storage

Configure Storage

These are the types of storage available for your Eucalyptus cloud.

Object storage

Eucalyptus provides an AWS S3-compatible object storage service that gives users web-based, general-purpose storage designed to be scalable, reliable, and inexpensive. You choose the object storage backend provider: Walrus or Ceph RGW. The Object Storage Gateway (OSG) provides access to objects via the backend provider you choose.

Block storage

Eucalyptus provides an AWS EBS-compatible block storage service that provides block storage for EC2 instances. Volumes can be created as needed and dynamically attached to and detached from instances as required. EBS provides persistent data storage for instances: the volume, and the data on it, can exist beyond the lifetime of an instance. You choose the block storage backend provider for a deployment.

4.1 - Configure Block Storage

Configure Block Storage

This topic describes how to configure block storage on the Storage Controller (SC) for the backend of your choice.

The Storage Controller (SC) provides functionality similar to the Amazon Elastic Block Store (Amazon EBS). The SC can interface with various storage systems. Eucalyptus block storage (EBS) exports storage volumes that can be attached to a VM and mounted or accessed as a raw block device. EBS volumes can persist past VM termination and are commonly used to store persistent data.

Eucalyptus provides the following open source (free) backend providers for the SC:

  • Overlay, using the local file system
  • DAS-JBOD (just a bunch of disks)
  • Ceph

You must configure the SC to use one of the backend provider options.

4.1.1 - Use Ceph-RBD

Use Ceph-RBD

This topic describes how to configure Ceph-RBD as the block storage backend provider for the Storage Controller (SC).

Prerequisites

  • Successful completion of all the install sections prior to this section.

  • The SC must be installed, registered, and running.

  • You must execute the steps below as an administrator.

  • You must have a functioning Ceph cluster.

  • Ceph user credentials with sufficient privileges must be available to the SCs and NCs (different user credentials can be used for the SCs and NCs).

  • Hypervisor support for Ceph-RBD on NCs. Node Controllers (NCs) are designed to communicate with the Ceph cluster via libvirt. This interaction requires a hypervisor that supports Ceph-RBD. See Configure Hypervisor Support for Ceph-RBD to satisfy this prerequisite.

To configure Ceph-RBD block storage for the zone, run the following commands on the CLC.

Configure the SC to use Ceph-RBD for EBS:

    euctl ZONE.storage.blockstoragemanager=ceph-rbd

The output of the command should be similar to:

one.storage.blockstoragemanager=ceph-rbd

Verify that the property value is now ceph-rbd:

euctl ZONE.storage.blockstoragemanager

Check the SC to be sure that it has transitioned out of the BROKEN state and is in the NOTREADY, DISABLED, or ENABLED state before configuring the rest of the properties for the SC. The ceph-rbd provider will assume defaults for the following properties for the SC:

euctl ZONE.storage.ceph
 
PROPERTY        one.storage.cephconfigfile  /etc/ceph/ceph.conf
DESCRIPTION     one.storage.cephconfigfile  Absolute path to Ceph configuration (ceph.conf) file. Default value is '/etc/ceph/ceph.conf'
 
PROPERTY        one.storage.cephkeyringfile /etc/ceph/ceph.client.eucalyptus.keyring
DESCRIPTION     one.storage.cephkeyringfile Absolute path to Ceph keyring (ceph.client.eucalyptus.keyring) file. Default value is '/etc/ceph/ceph.client.eucalyptus.keyring'
 
PROPERTY        one.storage.cephsnapshotpools       rbd
DESCRIPTION     one.storage.cephsnapshotpools       Ceph storage pool(s) made available to Eucalyptus for EBS snapshots. Use a comma separated list for configuring multiple pools. Default value is 'rbd'
 
PROPERTY        one.storage.cephuser        eucalyptus
DESCRIPTION     one.storage.cephuser        Ceph username employed by Eucalyptus operations. Default value is 'eucalyptus'
 
PROPERTY        one.storage.cephvolumepools rbd
DESCRIPTION     one.storage.cephvolumepools Ceph storage pool(s) made available to Eucalyptus for EBS volumes. Use a comma separated list for configuring multiple pools. Default value is 'rbd'

The following steps are optional; perform them only if the default values do not work for your cloud.

To set the Ceph username (the default value is ’eucalyptus’):

euctl ZONE.storage.cephuser=myuser

To set the absolute path to the keyring file containing the key for the ’eucalyptus’ user (the default value is ‘/etc/ceph/ceph.client.eucalyptus.keyring’):

euctl ZONE.storage.cephkeyringfile='/etc/ceph/ceph.client.myuser.keyring'

To set the absolute path to ceph.conf file (default value is ‘/etc/ceph/ceph.conf’):

euctl ZONE.storage.cephconfigfile=/path/to/ceph.conf

To change the comma-delimited list of Ceph pools assigned to Eucalyptus for managing EBS volumes (default value is ‘rbd’):

euctl ZONE.storage.cephvolumepools=rbd,myvolumes

To change the comma-delimited list of Ceph pools assigned to Eucalyptus for managing EBS snapshots (default value is ‘rbd’):

euctl ZONE.storage.cephsnapshotpools=mysnapshots

If you want to enable snapshot deltas for your Ceph backend:

Verify that snapshots are enabled:

euctl ZONE.storage.shouldtransfersnapshots=true

Set the maximum number of deltas to be created before creating a new full snapshot:

euctl ZONE.storage.maxsnapshotdeltas=NON_ZERO_INTEGER
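For example, for a zone named one (5 is an arbitrary illustrative value):

euctl one.storage.maxsnapshotdeltas=5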

Every NC will assume the following defaults:

CEPH_USER_NAME="eucalyptus"
CEPH_KEYRING_PATH="/etc/ceph/ceph.client.eucalyptus.keyring"
CEPH_CONFIG_PATH="/etc/ceph/ceph.conf"

To override the above defaults, add or edit the following properties in the /etc/eucalyptus/eucalyptus.conf file on the specific NC:

CEPH_USER_NAME="ceph-username-for-use-by-this-NC"
CEPH_KEYRING_PATH="path-to-keyring-file-for-ceph-username"
CEPH_CONFIG_PATH="path-to-ceph.conf-file"
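After editing /etc/eucalyptus/eucalyptus.conf, restart the eucalyptus-node service on that NC so the new values take effect (restarting is typically required for eucalyptus.conf changes):

systemctl restart eucalyptus-node.service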

Repeat this step for every NC in the specific Eucalyptus zone. Your Ceph backend is now ready to use with Eucalyptus.

4.1.1.1 - Configure Hypervisor Support for Ceph-RBD

This topic describes how to configure the hypervisor for Ceph-RBD support. The following instructions walk you through verifying and, if necessary, installing the required hypervisor support for Ceph-RBD. Repeat this process for every NC in the Eucalyptus zone.

Verify whether qemu-kvm and qemu-img are already installed:

rpm -q qemu-kvm qemu-img

If they are not installed, proceed to the step for preparing the RHEV qemu packages.

Verify qemu support for the ceph-rbd driver.

qemu-img --help
qemu-img version 0.12.1, Copyright (c) 2004-2008 Fabrice Bellard
...
Supported formats: raw cow qcow vdi vmdk cloop dmg bochs vpc vvfat qcow2 qed vhdx parallels nbd blkdebug host_cdrom 
host_floppy host_device file gluster gluster gluster gluster rbd

If the eucalyptus-node service is running, terminate/stop all instances. After all instances are terminated, stop the eucalyptus-node service.

systemctl stop eucalyptus-node.service

Prepare the RHEV qemu packages:

  • If this NC is a RHEL system and the RHEV subscription to qemu packages is available, consult the RHEV package procedure to install the qemu-kvm-ev and qemu-img-ev packages. Blacklist the RHEV packages in the Eucalyptus repository to ensure that packages from the RHEV repository are installed.

  • If this NC is a RHEL system and the RHEV subscription to qemu packages is unavailable, Eucalyptus-built and maintained qemu-rhev packages may be used. These packages are available in the same yum repository as other Eucalyptus packages. Note that using Eucalyptus-built RHEV packages voids the original RHEL support for the qemu packages.

  • If this NC is a non-RHEL (e.g., CentOS) system, Eucalyptus-built and maintained qemu-rhev packages may be used. If you are not using the RHEV package procedure to install the qemu-kvm-ev and qemu-img-ev packages, install the Eucalyptus-built RHEV packages qemu-kvm-ev and qemu-img-ev, which can be found in the same yum repository as other Eucalyptus packages:

    yum install qemu-kvm-ev qemu-img-ev

Start the libvirtd service.

systemctl start libvirtd.service

Verify qemu support for the ceph-rbd driver.

qemu-img --help
qemu-img version 0.12.1, Copyright (c) 2004-2008 Fabrice Bellard
...
Supported formats: raw cow qcow vdi vmdk cloop dmg bochs vpc vvfat qcow2 qed vhdx parallels nbd blkdebug host_cdrom 
host_floppy host_device file gluster gluster gluster gluster rbd

Make sure the eucalyptus-node service is started.

systemctl start eucalyptus-node.service

Your hypervisor is ready for Eucalyptus Ceph-RBD support. You are now ready to configure Ceph-RBD for Eucalyptus.

4.1.2 - About the BROKEN State

This topic describes the initial state of the Storage Controller (SC) after you have registered it with the Cloud Controller (CLC). The SC automatically goes to the BROKEN state after being registered with the CLC; it remains in that state until you explicitly configure the SC by telling it which backend storage provider to use.

You can check the state of a storage controller by running euserv-describe-services --expert and noting the state and status message of the SC(s). The output for an unconfigured SC looks something like this:

SERVICE	storage        	ZONE1        	SC71           	BROKEN    	37  	http://192.168.51.71:8773/services/Storage	arn:euca:eucalyptus:ZONE1:storage:SC71/
SERVICEEVENT	6c1f7a0a-21c9-496c-bb79-23ddd5749222	arn:euca:eucalyptus:ZONE1:storage:SC71/
SERVICEEVENT	6c1f7a0a-21c9-496c-bb79-23ddd5749222	ERROR
SERVICEEVENT	6c1f7a0a-21c9-496c-bb79-23ddd5749222	Sun Nov 18 22:11:13 PST 2012
SERVICEEVENT	6c1f7a0a-21c9-496c-bb79-23ddd5749222	SC blockstorageamanger not configured. Found empty or unset manager(unset). Legal values are: das,overlay,ceph

Note the error above: SC blockstoragemanager not configured. Found empty or unset manager(unset). Legal values are: das,overlay,ceph.

This indicates that the SC is not yet configured. It can be configured by setting the ZONE.storage.blockstoragemanager property to ‘das’, ‘overlay’, or ‘ceph’.

You can verify the configured SC block storage manager using:

euctl ZONE.storage.blockstoragemanager

to show the current value.

4.1.3 - Use Direct Attached Storage (JBOD)

This topic describes how to configure DAS-JBOD as the block storage backend provider for the Storage Controller (SC).

Prerequisites

  • Successful completion of all the install sections prior to this section.

  • The SC must be installed, registered, and running.

  • Direct Attached Storage requires that the SC host have enough space for locally cached snapshots.

  • You must execute the steps below as an administrator.

To configure DAS-JBOD block storage for the zone, run the following commands on the CLC.

Configure the SC to use Direct Attached Storage for EBS:

    euctl ZONE.storage.blockstoragemanager=das

The output of the command should be similar to:

one.storage.blockstoragemanager=das

Verify that the property value is now ‘das’:

euctl ZONE.storage.blockstoragemanager

Set the DAS device name property. The device name can be either a raw device (/dev/sdX, for example), or the name of an existing Linux LVM volume group.

euctl ZONE.storage.dasdevice=DEVICE_NAME

For example:

euctl one.storage.dasdevice=/dev/sdb

Your DAS-JBOD backend is now ready to use with Eucalyptus.

4.1.4 - Use the Overlay Local Filesystem

This topic describes how to configure the local filesystem as the block storage backend provider for the Storage Controller (SC).

Prerequisites

  • Successful completion of all the install sections prior to this section.
  • The SC must be installed, registered, and running.
  • The local filesystem must have enough space to hold volumes and snapshots created in the cloud.
  • You must execute the steps below as an administrator.

In this configuration, the SC itself hosts the volumes and snapshots for EBS and stores them as files on the local filesystem. It uses standard Linux iSCSI tools to serve the volumes to instances running on NCs.

To configure overlay block storage for the zone, run the following commands on the CLC.

Configure the SC to use the local filesystem for EBS:

euctl ZONE.storage.blockstoragemanager=overlay 

The output of the command should be similar to:

one.storage.blockstoragemanager=overlay

Verify that the property value is now ‘overlay’:

euctl ZONE.storage.blockstoragemanager

Your local filesystem (overlay) backend is now ready to use with Eucalyptus.

4.2 - Configure Object Storage

This topic describes how to configure object storage on the Object Storage Gateway (OSG) for the backend of your choice. The OSG passes requests to object storage providers and talks to the persistence layer (DB) to authenticate requests. You can use Walrus, MinIO, or Ceph-RGW as the object storage provider.

  • Walrus - the default backend provider. It is a single-host, Eucalyptus-integrated provider offering basic object storage functionality at small scale. Walrus is intended for light S3 usage.

  • MinIO - a high-performance, scalable object storage provider. MinIO implements the S3 API, which is used by the OSG, not directly by end users. Distributed MinIO provides protection against multiple node/drive failures and bit rot using erasure code.

  • Ceph-RGW - an object storage interface built on top of Librados to provide applications with a RESTful gateway to Ceph Storage Clusters. Ceph-RGW uses the Ceph Object Gateway daemon (radosgw), which is a FastCGI module for interacting with a Ceph Storage Cluster. Since it provides interfaces compatible with OpenStack Swift and Amazon S3, the Ceph Object Gateway has its own user management. Ceph Object Gateway can store data in the same Ceph Storage Cluster used to store data from Ceph Filesystem clients or Ceph Block Device clients. The S3 and Swift APIs share a common namespace, so you may write data with one API and retrieve it with the other.

You must configure the OSG to use one of the backend provider options.

Example showing unconfigured objectstorage:

# euserv-describe-services --show-headers --filter service-type=objectstorage
SERVICE  TYPE              	ZONE    	NAME                   	  STATE	
SERVICE  objectstorage      user-api-1  user-api-1.objectstorage  broken

4.2.1 - Use Ceph-RGW

This topic describes how to configure Ceph Rados Gateway (RGW) as the backend for the Object Storage Gateway (OSG).

Prerequisites

  • Successful completion of all the install sections prior to this section.
  • The UFS must be registered and enabled.
  • A Ceph storage cluster is available.
  • The ceph-radosgw service has been installed (on the UFS or any other host) and configured to use the Ceph storage cluster. Eucalyptus recommends using civetweb with the ceph-radosgw service. Civetweb is a lightweight web server included in the ceph-radosgw installation, and it is easier to install and configure than the alternative option, a combination of Apache and FastCGI modules.

For more information on Ceph-RGW, see the Ceph-RGW documentation.

Configure Ceph-RGW object storage

You must execute the steps below as an administrator.

Configure ceph-rgw as the storage provider using the euctl command:

euctl objectstorage.providerclient=ceph-rgw

Configure objectstorage.s3provider.s3endpoint to the ip:port of the host running the ceph-radosgw service:

euctl objectstorage.s3provider.s3endpoint=<radosgw-host-ip>:<radosgw-webserver-port>

Configure objectstorage.s3provider.s3accesskey and objectstorage.s3provider.s3secretkey with the radosgw user credentials:

euctl objectstorage.s3provider.s3accesskey=<radosgw-user-accesskey>
euctl objectstorage.s3provider.s3secretkey=<radosgw-user-secretkey>
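Putting the above together with hypothetical values (192.0.2.10 stands in for the radosgw host; 7480 is the default civetweb port for radosgw; the keys are placeholders):

euctl objectstorage.providerclient=ceph-rgw
euctl objectstorage.s3provider.s3endpoint=192.0.2.10:7480
euctl objectstorage.s3provider.s3accesskey=EXAMPLEACCESSKEY
euctl objectstorage.s3provider.s3secretkey=EXAMPLESECRETKEY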

The Ceph-RGW backend and OSG are now ready for production.

4.2.2 - Use MinIO Backend

This topic describes how to configure MinIO as the object storage backend provider for the Object Storage Gateway (OSG).

Prerequisites

  • The UFS must be registered and enabled.
  • Install and start MinIO

For more information on MinIO installation and configuration, see the MinIO Server Documentation.

To configure MinIO object storage

You must execute the steps below as an administrator.

Configure minio as the storage provider using the euctl command:

euctl objectstorage.providerclient=minio

Configure objectstorage.s3provider.s3endpoint to the ip:port of a host running the minio server:

euctl objectstorage.s3provider.s3endpoint=<minio-host-ip>:<minio-port>

Configure objectstorage.s3provider.s3accesskey and objectstorage.s3provider.s3secretkey with credentials for minio:

euctl objectstorage.s3provider.s3accesskey=<minio-accesskey>
euctl objectstorage.s3provider.s3secretkey=<minio-secretkey>

Configure the expected response code for minio:

euctl objectstorage.s3provider.s3endpointheadresponse=400
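Putting the above together with hypothetical values (192.0.2.20 stands in for the MinIO host; 9000 is MinIO's default listen port; the keys are placeholders):

euctl objectstorage.providerclient=minio
euctl objectstorage.s3provider.s3endpoint=192.0.2.20:9000
euctl objectstorage.s3provider.s3accesskey=EXAMPLEACCESSKEY
euctl objectstorage.s3provider.s3secretkey=EXAMPLESECRETKEY
euctl objectstorage.s3provider.s3endpointheadresponse=400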

The MinIO backend and OSG are now ready for production.

4.2.3 - Use Walrus Backend

This topic describes how to configure Walrus as the object storage backend provider for the Object Storage Gateway (OSG).

Prerequisites

  • Successful completion of all the install sections prior to this section.
  • The UFS must be registered and enabled.

To configure Walrus object storage

You must execute the steps below as an administrator.

Configure walrus as the storage provider using the euctl command.

euctl objectstorage.providerclient=walrus

Check that the OSG is enabled.

euserv-describe-services

If the state appears as disabled or broken, check the cloud-*.log files in the /var/log/eucalyptus directory. A disabled state generally indicates that there is a problem with your network or credentials. See Log File Location and Content for more information.

The Walrus backend and OSG are now ready for production.

5 - Install and Configure the Imaging Service

The Eucalyptus Imaging Service, introduced in Eucalyptus 4.0, makes it easier to deploy EBS images in your Eucalyptus cloud and automates many of the labor-intensive processes required for uploading data into EBS images.

The Eucalyptus Imaging Service is implemented as a system-controlled “worker” virtual machine that is monitored and controlled via Auto Scaling. Once the Imaging Service is configured, the Imaging Service VM is started automatically upon the first request that requires it, such as an EBS volume ingress. Specifically, in this release of Eucalyptus, these are the usage scenarios for the Eucalyptus Imaging Service:

  • Importing a raw disk image as a volume: If you have a raw disk image (containing either a data partition or a full operating system with a boot record, e.g., an HVM image), you can use the Imaging Service to import this into your cloud as a volume. This is accomplished with the euca-import-volume command. If the volume was populated with a bootable disk, that volume can be snapshotted and registered as an image.
  • Importing a raw disk image as an instance: If you have a raw disk image containing a bootable operating system, you can import this disk image into Eucalyptus as an instance: the Imaging Service automatically creates a volume, registers the image, and launches an instance from the image. This is accomplished with the euca-import-instance command, which has options for specifying the instance type and the SSH key for the instance to use.

Install and Register the Imaging Worker Image

Eucalyptus provides a command-line tool for installing and registering the Imaging Worker image. Once you have run the tool, the Imaging Worker will be ready to use. Run the following command on the machine where you installed the eucalyptus-service-image RPM package (it will set the services.imaging.worker.image property to the newly created EMI of the Imaging Worker):

esi-install-image --region localhost --install-default

Consider setting the services.imaging.worker.keyname property to an SSH keyname (previously created with the euca-create-keypair command), so that you can perform troubleshooting inside the Imaging Worker instance, if necessary:

euctl services.imaging.worker.keyname=mykey

Managing the Imaging Worker Instance

Eucalyptus automatically starts Imaging Worker instances when there are tasks for workers to perform. The cloud administrator can list the running Imaging Worker instances, if any, by running the command:

euca-describe-instances --filter tag-value=euca-internal-imaging-workers

To delete/stop the imaging worker:

esi-manage-stack -a delete imaging

To create/start the imaging worker:

esi-manage-stack -a create imaging

Consider setting the services.imaging.worker.instance_type property to an Instance Type with enough ephemeral disk to convert any of your paravirtual images. The Imaging Worker root filesystem takes up about 2 GB, so the largest paravirtual image that the Imaging Worker will be able to convert is the disk allocation of the Instance Type minus 2 GB.

euctl services.imaging.worker.instance_type=m3.xlarge

Troubleshooting Imaging Worker

If the Imaging Worker is configured correctly, users will be able to import data into EBS volumes with the euca-import-* commands, and paravirtual EMIs will run as instances. In some cases, though, paravirtual images may fail to convert (e.g., due to intermittent network failures or a network setup that doesn’t allow the Imaging Worker to communicate with the CLC), leaving the images in a special state. To troubleshoot:

If the Imaging Worker Instance Type does not provide sufficient disk space for converting all paravirtual images, the administrator may have to change the Instance Type used by the Imaging Worker. After changing the instance type, restart the Imaging Worker by terminating the old Imaging Worker instance:

euctl services.imaging.worker.instance_type=m2.2xlarge
euca-terminate-instances $(euca-describe-instances --filter tag-value=euca-internal-imaging-workers | grep INSTANCE | cut -f 2)

If the status of the conversion operation is ‘Image conversion failed’, but the image is marked as ‘available’ (in the output of euca-describe-images), the conversion can be retried by running the EMI again:

euca-run-instances ...

6 - Configure the Load Balancer

Install and Register the Load Balancer Image

Eucalyptus provides a tool for installing and registering the Load Balancer image. Once you have run the tool, your Load Balancer will be ready to use.

Run the following command on the machine where you installed the eucalyptus-service-image RPM package (it will set the services.loadbalancing.worker.image property to the newly created EMI of the Load Balancer):

esi-install-image --install-default

Verify Load Balancer Configuration

If you would like to verify that Load Balancer support is enabled, you can list installed Load Balancers. The currently active Load Balancer will be listed as enabled. If no Load Balancers are listed, or none are marked as enabled, then your Load Balancer support has not been configured properly. Run the following command to list installed Load Balancer images:

esi-describe-images

This will produce output similar to the following:

SERVICE     VERSION  ACTIVE     IMAGE      INSTANCES
    imaging       2.2      *     emi-573925e5      0
 loadbalancing    2.2      *     emi-573925e5      0
    database      2.2      *     emi-573925e5      0

You can also check the enabled Load Balancer EMI with:

euctl services.loadbalancing.worker.image

If you need to manually set the enabled Load Balancer EMI use:

euctl services.loadbalancing.worker.image=emi-12345678

7 - Configure Node Controller

On some Linux installations, a sufficiently large amount of local disk activity can slow down process scheduling. This can cause other operations (e.g., network communication and instance provisioning) to appear to stall. Examples of disk-intensive operations include preparing disk images for launch and creating ephemeral storage.

  1. Log in to an NC server and open the /etc/eucalyptus/eucalyptus.conf file.
  2. Change the CONCURRENT_DISK_OPS parameter to the number of disk-intensive operations you want the NC to perform at once.
    1. Set CONCURRENT_DISK_OPS to 1 to serialize all disk-intensive operations, or
    2. Set it to a higher number to increase the number of disk-intensive operations the NC will perform in parallel.
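For example, in /etc/eucalyptus/eucalyptus.conf (4 is an arbitrary illustrative value; tune it for your disk subsystem):

CONCURRENT_DISK_OPS=4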