Ceph and OpenStack Integration Guide

This guide has been tested against OpenStack Liberty with Ceph 9.2 (Infernalis).

 

1. Create Pools

By default, Ceph block devices use the "rbd" pool, but it is recommended to create dedicated pools for Cinder, Glance, Cinder Backup, and Nova:

ceph osd pool create volumes 128
ceph osd pool create images 128
ceph osd pool create backups 128
ceph osd pool create vms 128
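
A placement group count of 128 per pool suits a small cluster; for larger deployments, pg_num should be sized to the OSD count (a common rule of thumb is roughly 100 PGs per OSD in total, divided across pools and rounded to a power of two). A quick sanity check that the pools were created:

ceph osd lspools
ceph osd pool get volumes pg_num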

2. Configure the OpenStack Ceph Clients

As preparation, set up passwordless SSH login from the Ceph admin node to each OpenStack service node, with sudo privileges available on those nodes.

Install the Ceph client packages:

On the glance-api node:

sudo yum install python-rbd

On the nova-compute, cinder-backup, and cinder-volume nodes (this installs both the Python bindings and the client command-line tools):

sudo yum install ceph

The OpenStack nodes running the glance-api, cinder-volume, nova-compute, and cinder-backup services are all Ceph clients, and each of them needs ceph.conf.

Copy ceph.conf from the Ceph admin node to every client node:

ssh {your-openstack-server} sudo tee /etc/ceph/ceph.conf < /etc/ceph/ceph.conf
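
With several service nodes, a simple loop saves repetition. A minimal sketch, run from the Ceph admin node (glance01, cinder01, and compute01 are hypothetical hostnames):

for host in glance01 cinder01 compute01; do
    ssh $host sudo tee /etc/ceph/ceph.conf < /etc/ceph/ceph.conf
done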

3. Set Up Ceph Client Authentication

If cephx authentication is enabled, create dedicated users for Nova/Cinder, Glance, and Cinder Backup (note that the capability strings must be quoted so each is passed as a single argument):

ceph auth get-or-create client.cinder mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=volumes, allow rwx pool=vms, allow rx pool=images'
ceph auth get-or-create client.glance mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=images'
ceph auth get-or-create client.cinder-backup mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=backups'
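
As a sanity check (not required by the procedure), the key and capabilities granted to a client can be printed back:

ceph auth get client.cinder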

Distribute the keyrings (client.cinder, client.glance, and client.cinder-backup) to the corresponding nodes and fix their ownership:

ceph auth get-or-create client.glance | ssh {your-glance-api-server} sudo tee /etc/ceph/ceph.client.glance.keyring
ssh {your-glance-api-server} sudo chown glance:glance /etc/ceph/ceph.client.glance.keyring
ceph auth get-or-create client.cinder | ssh {your-cinder-volume-server} sudo tee /etc/ceph/ceph.client.cinder.keyring
ssh {your-cinder-volume-server} sudo chown cinder:cinder /etc/ceph/ceph.client.cinder.keyring
ceph auth get-or-create client.cinder-backup | ssh {your-cinder-backup-server} sudo tee /etc/ceph/ceph.client.cinder-backup.keyring
ssh {your-cinder-backup-server} sudo chown cinder:cinder /etc/ceph/ceph.client.cinder-backup.keyring

Nodes running nova-compute need the keyring of the cinder user:

ceph auth get-or-create client.cinder | ssh {your-nova-compute-server} sudo tee /etc/ceph/ceph.client.cinder.keyring

libvirt also needs the client.cinder key:

The libvirt process uses it to access the Ceph cluster while attaching a block device from Cinder.

First, create a temporary copy of the key on the nodes running nova-compute:

ceph auth get-key client.cinder | ssh {your-compute-node} tee client.cinder.key

Then log in to each compute node, add the key to libvirt, and remove the temporary files:

$ uuidgen
22003ebb-0f32-400e-9584-fa90b6efd874

$ cat > secret.xml <<EOF
<secret ephemeral='no' private='no'>
  <uuid>22003ebb-0f32-400e-9584-fa90b6efd874</uuid>
  <usage type='ceph'>
    <name>client.cinder secret</name>
  </usage>
</secret>
EOF
$ sudo virsh secret-define --file secret.xml
Secret 22003ebb-0f32-400e-9584-fa90b6efd874 created
$ sudo virsh secret-set-value --secret 22003ebb-0f32-400e-9584-fa90b6efd874 --base64 $(cat client.cinder.key) && rm client.cinder.key secret.xml
Secret value set

For easier management, when there are multiple compute nodes it is recommended to use the same UUID in the steps above on every node.
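
To verify on a compute node that the secret is in place, the following can be run (the UUID is the one defined above; secret-get-value prints the base64-encoded Ceph key):

virsh secret-list
virsh secret-get-value 22003ebb-0f32-400e-9584-fa90b6efd874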

4. Configure Glance to Use Ceph

For Juno and later releases, edit /etc/glance/glance-api.conf:

[DEFAULT]
...
default_store = rbd
...
[glance_store]
stores = rbd
rbd_store_pool = images
rbd_store_user = glance
rbd_store_ceph_conf = /etc/ceph/ceph.conf
rbd_store_chunk_size = 8

To enable copy-on-write cloning of images, add the following to the [DEFAULT] section:

show_image_direct_url = True

Note that this exposes the back-end location of images through the Glance API, so the endpoint should not be publicly accessible with this option enabled.
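
Copy-on-write cloning only works for images stored in raw format; other formats are copied rather than cloned. A hedged example of converting and uploading an image (the file and image names are placeholders):

qemu-img convert -f qcow2 -O raw cirros.qcow2 cirros.raw
glance image-create --name cirros --disk-format raw --container-format bare --file cirros.raw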

To disable the Glance image cache, make sure flavor is set to keystone only (rather than keystone+cachemanagement) in the [paste_deploy] section:

[paste_deploy]
flavor = keystone

It is also recommended to set the following properties on your images (an example of applying them follows the list):

- hw_scsi_model=virtio-scsi: add the virtio-scsi controller for better performance and discard (TRIM) support

- hw_disk_bus=scsi: connect every Cinder block device to that controller

- hw_qemu_guest_agent=yes: enable the QEMU guest agent

- os_require_quiesce=yes: send fs-freeze/thaw calls through the QEMU guest agent
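
These are image properties rather than glance-api.conf options. A minimal sketch using the Glance CLI ({image-id} is a placeholder for an existing image):

glance image-update {image-id} \
  --property hw_scsi_model=virtio-scsi \
  --property hw_disk_bus=scsi \
  --property hw_qemu_guest_agent=yes \
  --property os_require_quiesce=yes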

OpenStack configuration reference:

http://docs.openstack.org/liberty/config-reference/content/ch_configuring-openstack-image-service.html

5. Configure Cinder to Use Ceph

Edit /etc/cinder/cinder.conf:

[DEFAULT]
...
enabled_backends = ceph
...
[ceph]
volume_driver = cinder.volume.drivers.rbd.RBDDriver
rbd_pool = volumes
rbd_ceph_conf = /etc/ceph/ceph.conf
rbd_flatten_volume_from_snapshot = false
rbd_max_clone_depth = 5
rbd_store_chunk_size = 4
rados_connect_timeout = -1
glance_api_version = 2

If cephx authentication is enabled, also configure the credentials:

[ceph]
...
rbd_user = cinder
rbd_secret_uuid = 22003ebb-0f32-400e-9584-fa90b6efd874

Note: if you configure multiple Cinder back ends, glance_api_version = 2 must be set in the [DEFAULT] section.
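
In a multi-back-end setup, each back end is usually labeled with volume_backend_name and exposed through a volume type. A hedged sketch (the volume_backend_name value and the type name "ceph" are assumptions, not part of the configuration above):

# added to the [ceph] section of cinder.conf
volume_backend_name = ceph

# then, from a client with admin credentials
cinder type-create ceph
cinder type-key ceph set volume_backend_name=ceph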

The following table comes from the OpenStack Liberty configuration reference:

http://docs.openstack.org/liberty/config-reference/content/ceph-rados.html

Description of Ceph storage configuration options ([DEFAULT] section):

rados_connect_timeout = -1
    (IntOpt) Timeout value (in seconds) used when connecting to ceph cluster. If value < 0, no timeout is set and default librados value is used.

rados_connection_interval = 5
    (IntOpt) Interval value (in seconds) between connection retries to ceph cluster.

rados_connection_retries = 3
    (IntOpt) Number of retries if connection to ceph cluster failed.

rbd_ceph_conf = (empty string)
    (StrOpt) Path to the ceph configuration file.

rbd_cluster_name = ceph
    (StrOpt) The name of ceph cluster.

rbd_flatten_volume_from_snapshot = False
    (BoolOpt) Flatten volumes created from snapshots to remove dependency from volume to snapshot.

rbd_max_clone_depth = 5
    (IntOpt) Maximum number of nested volume clones that are taken before a flatten occurs. Set to 0 to disable cloning.

rbd_pool = rbd
    (StrOpt) The RADOS pool where rbd volumes are stored.

rbd_secret_uuid = None
    (StrOpt) The libvirt uuid of the secret for the rbd_user volumes.

rbd_store_chunk_size = 4
    (IntOpt) Volumes will be chunked into objects of this size (in megabytes).

rbd_user = None
    (StrOpt) The RADOS client name for accessing rbd volumes - only set when using cephx authentication.

volume_tmp_dir = None
    (StrOpt) Directory where temporary image files are stored when the volume driver does not write them directly to the volume. Warning: this option is now deprecated, please use image_conversion_dir instead.

6. Configure Cinder Backup to Use Ceph

Cinder Backup runs as its own dedicated daemon. On your Cinder Backup node, edit /etc/cinder/cinder.conf:

backup_driver = cinder.backup.drivers.ceph
backup_ceph_conf = /etc/ceph/ceph.conf
backup_ceph_user = cinder-backup
backup_ceph_chunk_size = 134217728
backup_ceph_pool = backups
backup_ceph_stripe_unit = 0
backup_ceph_stripe_count = 0
restore_discard_excess_bytes = true
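
After restarting the cinder-backup service, the path can be verified end to end. A sketch, assuming a keyring with access to the backups pool is available on the node ({volume-id} is a placeholder):

cinder backup-create {volume-id}
rbd --id cinder-backup -p backups ls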

The following notes on the Ceph backup driver come from the official OpenStack documentation:

To enable the Ceph backup driver, include the following option in the cinder.conf file:

backup_driver = cinder.backup.drivers.ceph

The following configuration options are available for the Ceph backup driver.

Table 2.52. Description of Ceph backup driver configuration options ([DEFAULT] section):

backup_ceph_chunk_size = 134217728
    (IntOpt) The chunk size, in bytes, that a backup is broken into before transfer to the Ceph object store.

backup_ceph_conf = /etc/ceph/ceph.conf
    (StrOpt) Ceph configuration file to use.

backup_ceph_pool = backups
    (StrOpt) The Ceph pool where volume backups are stored.

backup_ceph_stripe_count = 0
    (IntOpt) RBD stripe count to use when creating a backup image.

backup_ceph_stripe_unit = 0
    (IntOpt) RBD stripe unit to use when creating a backup image.

backup_ceph_user = cinder
    (StrOpt) The Ceph user to connect with. Default here is to use the same user as for Cinder volumes. If not using cephx this should be set to None.

restore_discard_excess_bytes = True
    (BoolOpt) If True, always discard excess bytes when restoring volumes i.e. pad with zeroes.

This example shows the default options for the Ceph backup driver.

backup_ceph_conf=/etc/ceph/ceph.conf
backup_ceph_user = cinder
backup_ceph_chunk_size = 134217728
backup_ceph_pool = backups
backup_ceph_stripe_unit = 0
backup_ceph_stripe_count = 0

7. Configure Nova to Use Ceph

To boot VMs directly from Ceph, Nova's ephemeral storage back end must be configured. It is also recommended to enable the RBD cache and the admin socket, which makes troubleshooting and performance analysis much easier.

The admin socket can then be accessed like this:

ceph daemon /var/run/ceph/ceph-client.cinder.19195.32310016.asok help

On each of your compute nodes, edit the Ceph configuration file, adding a [client] section:

[client]
rbd cache = true
rbd cache writethrough until flush = true
admin socket = /var/run/ceph/guests/$cluster-$type.$id.$pid.$cctid.asok
log file = /var/log/qemu/qemu-guest-$pid.log
rbd concurrent management ops = 20

Create the directories and adjust their ownership:

mkdir -p /var/run/ceph/guests/ /var/log/qemu/
chown qemu:libvirt /var/run/ceph/guests /var/log/qemu/

Note: the qemu user and libvirt group above apply to Red Hat-based systems.
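
On systemd-based distributions, /var/run lives on tmpfs, so the guests directory disappears after a reboot. One hedged way to recreate it automatically is a tmpfiles.d entry (the file name is arbitrary):

# /etc/tmpfiles.d/ceph-qemu.conf
d /var/run/ceph/guests 0770 qemu libvirt -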

Once this is configured, restart any VMs that are already running for the settings to take effect.

For Juno and later releases, edit /etc/nova/nova.conf on every compute node:

[libvirt]
images_type = rbd
images_rbd_pool = vms
images_rbd_ceph_conf = /etc/ceph/ceph.conf
rbd_user = cinder
rbd_secret_uuid = 22003ebb-0f32-400e-9584-fa90b6efd874
disk_cachemodes="network=writeback"
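
After restarting nova-compute, a quick confirmation that ephemeral disks really land in Ceph is to boot an instance and list the vms pool; Nova names each disk <instance-uuid>_disk. Run this where a keyring with access to the pool exists (the cinder user has rwx on vms per the caps set earlier):

rbd --id cinder -p vms ls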

It is recommended to disable Nova's password/key injection and rely on the metadata service and cloud-init for the same functionality instead.

On every compute node, edit /etc/nova/nova.conf (on Juno and later these options live in the [libvirt] section):

inject_password = false
inject_key = false
inject_partition = -2

To enable live migration support, add the following to the [libvirt] section:

live_migration_flag="VIR_MIGRATE_UNDEFINE_SOURCE,VIR_MIGRATE_PEER2PEER,VIR_MIGRATE_LIVE,VIR_MIGRATE_PERSIST_DEST,VIR_MIGRATE_TUNNELLED"
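
With shared Ceph storage and this flag set, instances can be moved between hosts without block migration. A usage sketch ({instance} and {target-host} are placeholders):

nova live-migration {instance} {target-host}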

8. Restart the OpenStack Services

sudo service openstack-glance-api restart
sudo service openstack-nova-compute restart
sudo service openstack-cinder-volume restart
sudo service openstack-cinder-backup restart
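
On CentOS 7 / RHEL 7 these services are managed by systemd; assuming the standard RDO unit names, the equivalent (run on the respective nodes) is:

sudo systemctl restart openstack-glance-api openstack-nova-compute openstack-cinder-volume openstack-cinder-backup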

