# Ceph Deployment (14.2.0 Nautilus)
# Introduction to Ceph
Data storage needs have grown dramatically over the past few years. Studies show that data in large organizations is growing at 40% to 60% per year, and many companies double their data volume annually. Analysts at International Data Corporation (IDC) estimated that the world held 54.4 exabytes of data in 2000; by 2007 this had reached 295 exabytes, and by 2020 the worldwide total was expected to reach 44 zettabytes. Traditional storage systems cannot keep up with this kind of growth; we need a system like Ceph, which is distributed, scalable, and, most importantly, economically viable. Ceph was designed specifically to handle today's and tomorrow's data storage needs.
1ZB=1024EB 1EB=1024PB 1PB=1024TB
# Ceph Installation (14.2.0 Nautilus)
Hostname | IP Address | Roles | Disks (OS)
---|---|---|---
ceph1 | 192.168.0.20 | OSD, MGR, MON | 3 data disks (CentOS 7.6)
ceph2 | 192.168.0.116 | OSD, MGR, MON | 3 data disks (CentOS 7.6)
ceph3 | 192.168.0.140 | OSD, MGR, MON | 3 data disks (CentOS 7.6)
# 1. Install the EPEL and Ceph Repositories
// Install the EPEL repository on every node in the cluster, and pick only one of the Ceph release versions below
$ yum install epel-release -y
rpm -ivh http://mirrors.aliyun.com/ceph/rpm-luminous/el7/noarch/ceph-release-1-1.el7.noarch.rpm # L release (Luminous)
rpm -ivh http://mirrors.aliyun.com/ceph/rpm-mimic/el7/noarch/ceph-release-1-1.el7.noarch.rpm # M release (Mimic)
rpm -ivh http://mirrors.aliyun.com/ceph/rpm-nautilus/el7/noarch/ceph-release-1-1.el7.noarch.rpm # N release (Nautilus); Ceph-CSI requires N or later
rpm -ivh http://mirrors.aliyun.com/ceph/rpm-octopus/el7/noarch/ceph-release-1-1.el7.noarch.rpm # O release (Octopus)
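After installing the release RPM, it can help to rebuild the yum cache and confirm the Ceph repository is active before continuing. A minimal check using standard yum commands on CentOS 7 (the exact repo names depend on which release RPM was chosen above):
$ yum clean all && yum makecache fast      # rebuild the yum metadata cache
$ yum repolist enabled | grep -i ceph      # the Ceph / Ceph-noarch repos should be listed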
# 2. Configure Host Mappings
// Configure the hostname and host mappings on all nodes
[root@ceph1 ~]# hostnamectl
   Static hostname: ceph1
         Icon name: computer-vm
           Chassis: vm
        Machine ID: de03a5f3bf5a4b7e9df939c9dc5b428d
           Boot ID: f9e3a46fbd1a4512a1e4271efad386ec
    Virtualization: kvm
  Operating System: CentOS Linux 7 (Core)
       CPE OS Name: cpe:/o:centos:centos:7
            Kernel: Linux 3.10.0-1160.15.2.el7.x86_64
      Architecture: x86-64
[root@ceph1 ~]# cat /etc/hosts
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
127.0.0.1 ceph-0001 ceph-0001
192.168.0.20 ceph1
192.168.0.116 ceph2
192.168.0.140 ceph3
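If the hostnames have not been set yet, they can be configured with hostnamectl and the hosts file copied to the other nodes. A minimal sketch (run the set-hostname command on each node with that node's own name; password prompts are expected here because passwordless SSH is only configured in the next step):
[root@ceph1 ~]# hostnamectl set-hostname ceph1    # use ceph2 / ceph3 on the other nodes
[root@ceph1 ~]# scp /etc/hosts root@ceph2:/etc/hosts
[root@ceph1 ~]# scp /etc/hosts root@ceph3:/etc/hosts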
# 3. Configure Passwordless SSH
// On the admin node, configure passwordless SSH access to all machines
[root@ceph1 ~]# ssh-keygen
Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa):
/root/.ssh/id_rsa already exists.
Overwrite (y/n)? y
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /root/.ssh/id_rsa.
Your public key has been saved in /root/.ssh/id_rsa.pub.
The key fingerprint is:
SHA256:V/E84SuSSepxM+tFvU8r0NzAcUq9XJsBkrzDEZVtE+Y root@ceph1
The key's randomart image is:
+---[RSA 2048]----+
| .o=o*o.|
| +.OoO.|
| o * @EB|
| o B = B |
| S O * = |
| . + B + o |
| . . o . .|
| . . . o.|
| . ...|
+----[SHA256]-----+
[root@ceph1 ~]# ssh-copy-id ceph2
[root@ceph1 ~]# ssh-copy-id ceph3
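A quick way to confirm that passwordless login works is to run a remote command against each node; no password prompt should appear:
[root@ceph1 ~]# ssh ceph2 hostname
ceph2
[root@ceph1 ~]# ssh ceph3 hostname
ceph3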
# 4. Install the Deployment Tool on the Admin Node and Create the Cluster
// Install the deployment tool on the admin node (the following steps only need to be run on the admin node)
The main tool is ceph-deploy, which is used to create the cluster. The ceph-deploy command fails with errors when a few Python dependencies are missing, so install those dependency packages as well.
[root@ceph1 ~]# yum install ceph-deploy python-setuptools python2-subprocess32 ceph-common -y
Create a new cluster, specify the MON nodes, and generate ceph.conf and the keyring.
The cluster network is an optional network used for internal cluster traffic; in production it should be set to a separate private subnet.
[root@ceph1 ~]# mkdir my-cluster
[root@ceph1 ~]# cd my-cluster
[root@ceph1 ~]# ceph-deploy new ceph1 ceph2 ceph3 --cluster-network 192.168.0.0/24 --public-network 192.168.0.0/24
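ceph-deploy new writes its files into the current directory, so the working directory should now contain the generated configuration, monitor keyring, and deploy log, roughly:
[root@ceph1 my-cluster]# ls
ceph.conf  ceph-deploy-ceph.log  ceph.mon.keyring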
# 5. Install Ceph on the Cluster Nodes
// Install the Ceph packages on the cluster nodes (the following step only needs to be run on the admin node)
This is equivalent to running yum -y install ceph ceph-radosgw on every node in the cluster.
[root@ceph1 ~]# ceph-deploy install --no-adjust-repos ceph1 ceph2 ceph3
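Once the install step finishes, the installed version can be checked on every node to make sure all of them ended up on the same Nautilus release. A simple check over SSH (relies on the passwordless SSH configured in step 3):
[root@ceph1 ~]# for node in ceph1 ceph2 ceph3; do ssh $node ceph --version; done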
# 6. Initialize the MON Nodes
// Create and initialize the MON nodes; the MON daemon listens on port 6789 (the following steps only need to be run on the admin node)
[root@ceph1 ~]# ceph-deploy mon create-initial
Push the configuration file and admin keyring to the other hosts
[root@ceph1 ~]# ceph-deploy admin ceph1 ceph2 ceph3
Push only the configuration file
[root@ceph1 ~]# ceph-deploy config push ceph1 ceph2 ceph3
# 7. Deploy MGR on the Target Hosts
// From version 12 (Luminous) onward, MGR daemons must be deployed (the following step only needs to be run on the admin node)
[root@ceph1 ~]# ceph-deploy mgr create ceph1 ceph2 ceph3
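At this point the cluster status can be checked again; the services section should show the three-monitor quorum and one active mgr with two standbys (exact ages and ordering will differ):
[root@ceph1 ~]# ceph -s
# expect: mon: 3 daemons with ceph1/ceph2/ceph3 in quorum, and mgr: one active, two standbys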
# 8. Disable Insecure Mode
// Checking the cluster status at this point shows HEALTH_WARN rather than HEALTH_OK, because the monitors still allow insecure global_id reclaim (the following steps only need to be run on the admin node)
# Check the cluster status
[root@ceph1 ~]# ceph -s
# Handle cluster warnings
# HEALTH_WARN: mons are allowing insecure global_id reclaim
[root@ceph1 ~]# ceph config set mon auth_allow_insecure_global_id_reclaim false
# Module 'restful' has failed dependency: No module named 'pecan'
[root@ceph1 ~]# pip3 install pecan werkzeug
[root@ceph1 ~]# systemctl restart ceph-mon.target
[root@ceph1 ~]# systemctl restart ceph-mgr.target
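After restarting the daemons, the warning state can be re-checked; the insecure global_id reclaim message should no longer be reported:
[root@ceph1 ~]# ceph health detail
# the "insecure global_id reclaim" warning should no longer be listed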
# 9. Create the OSDs
// Create 9 OSDs, three per node (the following steps only need to be run on the admin node)
# List all available disks on the cluster nodes
[root@ceph1 ~]# ceph-deploy disk list ceph1 ceph2 ceph3
# Wipe the disks that will be used as OSD devices (disk zap)
# In production, separate block-db and block-wal devices can be specified to improve performance
[root@ceph1 ~]# for dev in /dev/vdb /dev/vdc /dev/vdd
> do
> ceph-deploy disk zap ceph1 $dev
> ceph-deploy osd create ceph1 --bluestore --data $dev
> ceph-deploy disk zap ceph2 $dev
> ceph-deploy osd create ceph2 --bluestore --data $dev
> ceph-deploy disk zap ceph3 $dev
> ceph-deploy osd create ceph3 --bluestore --data $dev
> done
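When the loop finishes, all nine OSDs should be up and in; this can be confirmed from the admin node (the tree view also shows how the OSDs are distributed across the three hosts):
[root@ceph1 ~]# ceph osd stat
# expect: 9 osds: 9 up, 9 in
[root@ceph1 ~]# ceph osd tree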
# 10. Enable the Dashboard
// Check the Ceph version; from Nautilus onward, ceph-mgr-dashboard must be installed on every node that runs an mgr
[root@ceph1 ~]# ceph mgr versions
[root@ceph1 ~]# yum install -y ceph-mgr-dashboard
View the MGR modules and help information
[root@ceph1 ~]# ceph mgr module --help
[root@ceph1 ~]# ceph mgr module ls
Enable or disable the MGR dashboard module
[root@ceph1 ~]# ceph mgr module enable dashboard
[root@ceph1 ~]# ceph mgr module disable dashboard
Create a self-signed certificate, or disable SSL
[root@ceph1 ~]# ceph dashboard create-self-signed-cert
[root@ceph1 ~]# ceph config set mgr mgr/dashboard/ssl false
Set the IP and port of the MGR dashboard module (optional)
[root@ceph1 ~]# ceph config set mgr mgr/dashboard/server_addr 0.0.0.0
[root@ceph1 ~]# ceph config set mgr mgr/dashboard/server_port 8443
Set up login authentication
[root@ceph1 ~]# echo "1" > password.txt
[root@ceph1 ~]# ceph dashboard ac-user-create admin -i password.txt administrator
[root@ceph1 ~]# ceph dashboard ac-user-show admin
[root@ceph1 ~]# systemctl restart ceph-mgr.target
List the service endpoints provided by the MGR modules
[root@ceph1 ~]# ceph mgr services
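The output is a small JSON map of module name to URL; with the settings above it should look something like the following (the hostname and port depend on server_addr/server_port and on whether SSL is enabled):
{
    "dashboard": "https://ceph1:8443/"
}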
Enable object gateway (RGW) management
Create an RGW user
[root@ceph1 ~]# radosgw-admin user create --uid=<value> --display-name=<value>
Configure the object gateway credentials
[root@ceph1 ~]# ceph dashboard set-rgw-api-access-key $access_key
[root@ceph1 ~]# ceph dashboard set-rgw-api-secret-key $secret_key
Configure the object gateway host and port
[root@ceph1 ~]# ceph dashboard set-rgw-api-host <value>
[root@ceph1 ~]# ceph dashboard set-rgw-api-port <int>
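The dashboard settings above assume an RGW daemon is already running somewhere in the cluster; this guide does not deploy one. As a rough sketch, an RGW instance can be created with ceph-deploy (by default it listens on port 7480, which is what set-rgw-api-port would then point at), and the access/secret keys of an existing user can be read back with radosgw-admin:
[root@ceph1 ~]# ceph-deploy rgw create ceph1
[root@ceph1 ~]# radosgw-admin user info --uid=<value>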
# 11. Configure the Client
On the client:
[root@client ~]# yum -y install centos-release-ceph-nautilus.noarch
[root@client ~]# yum -y install ceph-common
# 12. Configure the Keyring on the Ceph Server
// Create a user (the following steps only need to be run on the admin node)
[root@ceph1 ceph]# ceph auth get-or-create client.rbdtest mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=rbd' | tee /etc/ceph/ceph.client.rbdtest.keyring
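The caps above reference a pool named rbd, which this guide creates by hand (the application-enable step for it is covered in the troubleshooting section at the end). As a sketch, a small pool with 64 placement groups, matching the pg count shown in the status output below, can be created with:
[root@ceph1 ceph]# ceph osd pool create rbd 64 64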
[root@ceph1 ceph]# cat ceph.conf
[global]
fsid = 235b6c2c-91c6-4535-b5aa-9337e934ed56
public_network = 192.168.0.0/24
cluster_network = 192.168.0.0/24
mon_initial_members = ceph1, ceph2, ceph3
mon_host = 192.168.0.20,192.168.0.116,192.168.0.140
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx
rbd_default_features = 1
// Transfer the keyring and ceph.conf to the client (the following steps only need to be run on the admin node)
[root@ceph1 ceph]# scp ceph.client.rbdtest.keyring root@192.168.0.51:/etc/ceph/
[root@ceph1 ceph]# scp ceph.conf root@192.168.0.51:/etc/ceph/
# 13. Verify the Keyring on the Client
[root@client ceph]# ceph -s --name client.rbdtest
  cluster:
    id:     235b6c2c-91c6-4535-b5aa-9337e934ed56
    health: HEALTH_OK

  services:
    mon: 3 daemons, quorum ceph3,ceph1,ceph2 (age 8m)
    mgr: ceph1(active, since 8m), standbys: ceph3, ceph2
    osd: 9 osds: 9 up (since 2h), 9 in (since 2h)

  data:
    pools:   1 pools, 64 pgs
    objects: 20 objects, 14 MiB
    usage:   9.1 GiB used, 171 GiB / 180 GiB avail
    pgs:     64 active+clean
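With the keyring in place, the client can also exercise the pool end to end. A minimal sketch (the image name test01 and the mount point are hypothetical; rbd_default_features = 1 in ceph.conf keeps the image mappable by the CentOS 7 kernel client):
[root@client ~]# rbd create rbd/test01 --size 1G --name client.rbdtest
[root@client ~]# rbd map rbd/test01 --name client.rbdtest     # prints the mapped device, typically /dev/rbd0
[root@client ~]# mkfs.xfs /dev/rbd0 && mount /dev/rbd0 /mnt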
# Troubleshooting
One issue encountered: checking the cluster health showed a WARN state, and ceph health detail revealed that an application needed to be enabled on the pool (the rbd pool referenced below is the one we created by hand).
[root@ceph1 ceph]# ceph -s
  cluster:
    id:     235b6c2c-91c6-4535-b5aa-9337e934ed56
    health: HEALTH_WARN
            application not enabled on 1 pool(s)

  services:
    mon: 3 daemons, quorum ceph3,ceph1,ceph2 (age 2h)
    mgr: ceph1(active, since 110m), standbys: ceph3, ceph2
    osd: 9 osds: 9 up (since 115m), 9 in (since 115m)

  data:
    pools:   1 pools, 64 pgs
    objects: 20 objects, 14 MiB
    usage:   9.1 GiB used, 171 GiB / 180 GiB avail
    pgs:     64 active+clean
[root@ceph1 ceph]# ceph health
HEALTH_WARN application not enabled on 1 pool(s)
[root@ceph1 ~]# ceph health detail
HEALTH_WARN application not enabled on 1 pool(s)
POOL_APP_NOT_ENABLED application not enabled on 1 pool(s)
application not enabled on pool 'rbd'
use 'ceph osd pool application enable <pool-name> <app-name>', where <app-name> is 'cephfs', 'rbd', 'rgw', or freeform for custom applications.
Enable the application on the pool:
[root@ceph1 ~]# ceph osd pool application enable rbd rbd
enabled application 'rbd' on pool 'rbd'
Looking at the dashboard again, the warning is gone and the block page can now see the pool. Here we used rbd (block device); a pool can only be enabled for one application type, and the other two types are cephfs (file system) and rgw (object storage).