Introduction
Linux has been making steady inroads into the scalable computing space, and in particular into scalable storage. Ceph recently joined the impressive lineup of file system alternatives for Linux: it is a distributed file system that adds replication and fault tolerance while maintaining POSIX compatibility.
The Ceph ecosystem architecture can be divided into four parts:
1. Clients: the data users
2. cmds: the metadata server cluster, which caches and synchronizes the distributed metadata
3. cosd: the object storage cluster, which stores data and metadata as objects and performs other key functions
4. cmon: the cluster monitors, which perform the monitoring functions
Preparation
Prepare two CentOS 8 virtual machines: configure the IP address and hostname on each, synchronize the system time, disable the firewall and SELinux, map the IP addresses to the hostnames, and attach an extra disk to each VM (a minimal sketch of these steps follows the reference link below).
IP | hostname |
---|---|
192.168.29.148 | controller |
192.168.29.149 | computer |
For the OpenStack setup itself, see: https://blog.51cto.com/14832653/2542863
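A minimal sketch of the preparation steps, shown for controller and assuming chrony for time sync (repeat on computer with its own hostname; configure the static IP with your usual tool, e.g. nmcli):
[root@controller ~]# hostnamectl set-hostname controller
[root@controller ~]# systemctl disable --now firewalld
[root@controller ~]# setenforce 0
[root@controller ~]# sed -i 's/^SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config
[root@controller ~]# cat >> /etc/hosts <<EOF
192.168.29.148 controller
192.168.29.149 computer
EOF
[root@controller ~]# yum install chrony -y && systemctl enable --now chronyd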
Note: if an OpenStack cluster has already been created, delete its instances, images, and volumes first.
Install the Ceph repository
[root@controller ~]# yum install centos-release-ceph-octopus.noarch -y
Install the Ceph components
[root@controller ~]# yum install cephadm -y
[root@controller ~]# yum install ceph -y
Install libvirt on the computer node
[root@computer ~]# yum install libvirt -y
Deploy the Ceph cluster
Create the cluster
[root@controller ~]# mkdir -p /etc/ceph
[root@controller ~]# cd /etc/ceph/
[root@controller ceph]# cephadm bootstrap --mon-ip 192.168.29.148
[root@controller ceph]# ceph status
[root@controller ceph]# cephadm install ceph-common
[root@controller ceph]# ssh-copy-id -f -i /etc/ceph/ceph.pub root@computer
Modify the configuration
[root@controller ceph]# ceph config set mon public_network 192.168.29.0/24
Add the host
[root@controller ceph]# ceph orch host add computer
[root@controller ceph]# ceph orch host ls
Initialize the cluster monitors
[root@controller ceph]# ceph orch host label add controller mon
[root@controller ceph]# ceph orch host label add computer mon
[root@controller ceph]# ceph orch apply mon label:mon
[root@controller ceph]# ceph orch daemon add mon computer:192.168.29.149
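To confirm that the orchestrator actually scheduled a monitor on each labeled host, list the running daemons (expect one mon on controller and one on computer; the exact output will vary):
[root@controller ceph]# ceph orch ps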
Create OSDs
[root@controller ceph]# ceph orch daemon add osd controller:/dev/nvme0n2
[root@controller ceph]# ceph orch daemon add osd computer:/dev/nvme0n3
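If a daemon add command is rejected, the disk is usually not seen as available (e.g. it still carries partitions or a filesystem); the orchestrator can list the candidate devices on every host:
[root@controller ceph]# ceph orch device ls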
Check the cluster status
[root@controller ceph]# ceph -s
Check the cluster capacity
[root@controller ceph]# ceph df
Create pools
[root@controller ceph]# ceph osd pool create volumes 64
[root@controller ceph]# ceph osd pool create vms 64
#Tag the pools with the rbd application
[root@controller ceph]# ceph osd pool application enable vms rbd
[root@controller ceph]# ceph osd pool application enable volumes rbd
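For this small two-OSD cluster, 64 PGs per pool is a reasonable starting value; both the PG count and the application tag can be verified afterwards:
[root@controller ceph]# ceph osd pool get volumes pg_num
[root@controller ceph]# ceph osd pool application get vms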
Check mon, osd, and pool status
[root@controller ceph]# ceph mon stat
[root@controller ceph]# ceph osd status
[root@controller ceph]# ceph osd lspools
List the RBD images in the pools
[root@controller ~]# rbd ls vms
[root@controller ~]# rbd ls volumes
Integrate the Ceph cluster with OpenStack
Create the cinder user and set its permissions
[root@controller ceph]# ceph auth get-or-create client.cinder mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=volumes, allow rwx pool=vms'
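These caps give client.cinder read access to the monitors and read/write access to the two pools; they can be reviewed at any time:
[root@controller ceph]# ceph auth get client.cinder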
Save the keyring
[root@controller ceph]# ceph auth get-or-create client.cinder | tee /etc/ceph/ceph.client.cinder.keyring
#Send the key to computer
[root@controller ~]# ceph auth get-key client.cinder > client.cinder.key
[root@controller ~]# scp client.cinder.key computer:/root/
#Change the ownership
[root@controller ceph]# chown cinder:cinder /etc/ceph/ceph.client.cinder.keyring
Set up the libvirt secret
#Generate a UUID on computer
[root@computer ~]# uuidgen
1fad1f90-63fb-4c15-bfc3-366c6559c1fe
#Create the secret file
[root@computer ~]# vi secret.xml
<secret ephemeral='no' private='no'>
<uuid>1fad1f90-63fb-4c15-bfc3-366c6559c1fe</uuid>
<usage type='ceph'>
<name>client.cinder secret</name>
</usage>
</secret>
#Define the secret
[root@computer ~]# virsh secret-define --file secret.xml
#Set the secret value and clean up the temporary files
[root@computer ~]# virsh secret-set-value --secret 1fad1f90-63fb-4c15-bfc3-366c6559c1fe --base64 $(cat client.cinder.key) && rm -rf client.cinder.key secret.xml
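To verify that libvirt registered the secret, list it and read the stored value back (it should match the base64-encoded contents of client.cinder.key):
[root@computer ~]# virsh secret-list
[root@computer ~]# virsh secret-get-value 1fad1f90-63fb-4c15-bfc3-366c6559c1fe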
Integrate the cinder module
Edit the configuration file
[root@controller ~]# vi /etc/cinder/cinder.conf
[DEFAULT]
rpc_backend = rabbit
auth_strategy = keystone
my_ip = 192.168.29.148
enabled_backends = ceph
default_volume_type = ceph
glance_api_version = 2
[ceph]
volume_driver = cinder.volume.drivers.rbd.RBDDriver
volume_backend_name = ceph
rbd_pool = volumes
rbd_ceph_conf = /etc/ceph/ceph.conf
rbd_flatten_volume_from_snapshot = false
rbd_max_clone_depth = 5
rbd_store_chunk_size = 4
rados_connect_timeout = -1
rbd_user = cinder
#The UUID generated on computer
rbd_secret_uuid = 1fad1f90-63fb-4c15-bfc3-366c6559c1fe
Synchronize the database
#If a cinder database already exists, drop it, then recreate and sync it
[root@controller ~]# su -s /bin/sh -c "cinder-manage db sync" cinder
Restart the services
[root@controller ~]# systemctl restart openstack-cinder-api.service openstack-cinder-scheduler.service openstack-cinder-volume.service
Create the ceph volume type and bind it to the backend
[root@controller ~]# source admin-openrc
[root@controller ~]# cinder type-create ceph
[root@controller ~]# cinder type-key ceph set volume_backend_name=ceph
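As a quick sanity check, create a small volume of the new type and confirm that a backing image appears in the volumes pool (the volume name test-vol is arbitrary):
[root@controller ~]# openstack volume create --type ceph --size 1 test-vol
[root@controller ~]# rbd ls volumes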
Integrate the nova-compute module
Edit the configuration file on the computer node
[root@computer ~]# vi /etc/nova/nova.conf
[libvirt]
virt_type = qemu
inject_password = true
inject_partition = -1
images_type = rbd
images_rbd_pool = vms
images_rbd_ceph_conf = /etc/ceph/ceph.conf
rbd_user = cinder
rbd_secret_uuid = 1fad1f90-63fb-4c15-bfc3-366c6559c1fe
disk_cachemodes = "network=writeback"
live_migration_flag = "VIR_MIGRATE_UNDEFINE_SOURCE,VIR_MIGRATE_PEER2PEER,VIR_MIGRATE_LIVE,VIR_MIGRATE_PERSIST_DEST,VIR_MIGRATE_TUNNELLED"
hw_disk_discard = unmap
[root@computer ~]# vi /etc/ceph/ceph.conf
[client]
rbd cache=true
rbd cache writethrough until flush=true
admin socket = /var/run/ceph/guests/$cluster-$type.$id.$pid.$cctid.asok
log file = /var/log/qemu/qemu-guest-$pid.log
rbd concurrent management ops = 20
Create the socket and log directories
[root@computer ~]# mkdir -p /var/run/ceph/guests/ /var/log/qemu/
[root@computer ~]# chmod -R 777 /var/run/ceph/guests/ /var/log/qemu/
Distribute the keyring from controller
[root@controller ~]# cd /etc/ceph
[root@controller ceph]# scp ceph.client.cinder.keyring root@computer:/etc/ceph
Restart the services
[root@computer ~]# systemctl stop libvirtd openstack-nova-compute
[root@computer ~]# systemctl start libvirtd openstack-nova-compute
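After the restart, confirm the compute service is up and, once an instance is launched, that its root disk shows up as an RBD image in the vms pool:
[root@computer ~]# systemctl status openstack-nova-compute
[root@controller ~]# openstack compute service list
[root@controller ~]# rbd ls vms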