Ceph Installation and Deployment Documents
OpenStack and Ceph Integration Installation Guide

Contents: 1 Overview; 2 Version compatibility table; 3 System architecture (3.1 Physical layout; 3.2 Logical layout; 3.3 OpenStack installation; 3.4 Ceph installation: 3.4.1 IP plan, 3.4.2 Installation steps; 3.5 Installing the Ceph client on the controller and compute nodes; 3.6 Configuring Glance on the controller node to use Ceph; 3.7 Configuring Cinder on the controller node to use Ceph; 3.8 Configuring Nova on the compute node to use Ceph)
1 Overview
This document describes how to configure the Glance, Cinder and Nova components of OpenStack to use Ceph as their storage backend.
2 Version compatibility table
3 System architecture
3.1 Physical layout: Ceph node1, Ceph node2, Ceph node3
3.2 Logical layout
3.3 OpenStack installation: installed with Zhao Zigu's automated deployment, three-node deployment.
3.4 Ceph installation
3.4.1 IP plan
3.4.2 Installation steps
1. Rename the three hosts to ceph148, ceph149 and ceph150.
2. Edit /etc/hosts on all three hosts so that it contains:
192.168.1.148 ceph148
192.168.1.149 ceph149
192.168.1.150 ceph150
3. Copy ceph.zip to the /home/ceph directory and unpack it; this produces the ceph and deploy directories.
4. Edit /etc/yum.repos.d/ceph.repo with the following content:
[ceph-noarch]
name=Ceph noarch packages
baseurl=file:///home/ceph/ceph
enabled=1
gpgcheck=0
[ceph-deploy]
name=Ceph deploy packages
baseurl=file:///home/ceph/deploy
enabled=1
gpgcheck=0
5. Establish mutual SSH trust among the three nodes.
On ceph148: ssh-keygen; ssh-copy-id ceph148; ssh-copy-id ceph149; ssh-copy-id ceph150
On ceph149: ssh-keygen; ssh-copy-id ceph148; ssh-copy-id ceph150
On ceph150: ssh-keygen; ssh-copy-id ceph148; ssh-copy-id ceph149
6. Disable SELinux and the firewall on all three nodes:
service iptables stop
chkconfig iptables off
Change SELINUX=enforcing to SELINUX=disabled in /etc/sysconfig/selinux, then reboot.
7. Install Ceph on all three hosts: yum install ceph -y
8. Install ceph-deploy on ceph148: yum install ceph-deploy -y
9. Run:
cd /etc/ceph
ceph-deploy new ceph148 ceph149 ceph150
10. Deploy the monitor nodes:
ceph-deploy mon create ceph148 ceph149 ceph150
ceph-deploy gatherkeys ceph148    // collect the keys
11. Deploy the OSD nodes:
ceph-deploy osd prepare ceph148:/dev/sdb ceph148:/dev/sdc ceph149:/dev/sdb ceph149:/dev/sdc ceph150:/dev/sdb ceph150:/dev/sdc
12. If needed, deploy the MDS:
ceph-deploy mds create ceph148 ceph149 ceph150
13. Restart the services: /etc/init.d/ceph -a restart
14. Check that the cluster is healthy with ceph -s. Example output:
cluster 4fa8cb32-fea1-4d68-a341-ebddab2f3e0f
health HEALTH_WARN clock skew detected on mon.ceph150
monmap e2: 3 mons at {ceph148=192.168.1.148:6789/0,ceph149=192.168.1.149:6789/0,ceph150=192.168.1.150:6789/0}, election epoch 8, quorum 0,1,2 ceph148,ceph149,ceph150
osdmap e41: 6 osds: 6 up, 6 in
pgmap v76: 192 pgs, 3 pools, 0 bytes data, 0 objects
215 MB used, 91878 MB / 92093 MB avail
192 active+clean
15. Configure ceph148 as the NTP server and have the other nodes sync time from it periodically (this also clears the clock-skew warning above).

3.5 Installing the Ceph client on the controller and compute nodes
(Not needed if ceph --version already reports a version on the OpenStack nodes, which means the client is installed.)
1. Import the release keys:
rpm --import 'https:///git/?p=ceph.git;a=blob_plain;f=keys/release.asc'
rpm --import 'https:///git/?p=ceph.git;a=blob_plain;f=keys/autobuild.asc'
2. Create /etc/yum.repos.d/ceph-extras with the following content:
[ceph-extras]
name=Ceph Extras Packages
baseurl=/packages/ceph-extras/rpm/centos6/$basearch
enabled=1
priority=2
gpgcheck=1
type=rpm-md
gpgkey=https:///git/?p=ceph.git;a=blob_plain;f=keys/release.asc
plus two further sections, [ceph-extras-noarch] (baseurl=/packages/ceph-extras/rpm/centos6/noarch) and [ceph-extras-source] (baseurl=/packages/ceph-extras/rpm/centos6/SRPMS), identical apart from name and baseurl.
3. Add the Ceph repository:
rpm -Uvh /rpms/el6/noarch/ceph-release-1-0.el6.noarch.rpm
4. Add the EPEL repository:
rpm -Uvh /pub/epel/6/x86_64/epel-release-6-8.noarch.rpm
5. Install Ceph:
yum update -y
yum install ceph -y

3.6 Configuring Glance on the controller node to use Ceph
1. Copy the two files under /etc/ceph on ceph148 to the controller and compute nodes:
cd /etc/ceph/
scp ceph.conf ceph.client.admin.keyring 192.168.1.142:/etc/ceph/
scp ceph.conf ceph.client.admin.keyring 192.168.1.140:/etc/ceph/
2. Make the keyring readable:
chmod +r /etc/ceph/ceph.client.admin.keyring
3. Create the glance pool on ceph148:
rados mkpool glance
4. Edit the following options in /etc/glance/glance-api.conf on node 140:
rbd_store_ceph_conf = /etc/ceph/ceph.conf
default_store = rbd
rbd_store_user = admin
rbd_store_pool = glance
5. Restart the glance-api service:
/etc/init.d/openstack-glance-api restart
6. Test a local image upload: copy cirros-0.3.2-x86_64-disk.img to /home/ on node 140 and run:
glance image-create --name "cirros-0.3.2-x86_64-10" --disk-format qcow2 --container-format bare --is-public True --progress </home/cirros-0.3.2-x86_64-disk.img
The upload reaches 100% and the image is created with id 49a71de0-0842-4a7a-b756-edfcb0b86153, checksum 64d7c1cd2b6f60c92c14662941cb7913, disk_format qcow2, container_format bare, size 13167616 and status active.
7. glance image-list now lists cirros-0.3.2-x86_64-10 as active.
8. Test an upload from the dashboard as well; glance image-list then shows both images (the dashboard upload received id da28a635-2336-4603-a596-30879f4716f4).
9. List the objects in the glance pool:
rbd ls glance
49a71de0-0842-4a7a-b756-edfcb0b86153
da28a635-2336-4603-a596-30879f4716f4

3.7 Configuring Cinder on the controller node to use Ceph
1. Create the cinder pool on ceph148:
rados mkpool cinder
2. Edit the following options in /etc/cinder/cinder.conf on node 140:
volume_driver = cinder.volume.drivers.rbd.RBDDriver
rbd_pool=cinder
rbd_user=admin
rbd_ceph_conf=/etc/ceph/ceph.conf
3. Restart the volume service:
/etc/init.d/openstack-cinder-volume restart
4. Create a 1 GB volume from the command line:
cinder create --display-name dev1 1
The command prints the new volume's properties: id 1d8f3416-fb15-44a9-837f-7724a9034b1e, display_name dev1, size 1, status creating.
5. cinder list shows dev1 in the creating state and, shortly after, available. Create a second 2 GB volume (dev2) from the dashboard.
6. cinder list now shows both dev1 (1 GB, id 1d8f3416-fb15-44a9-837f-7724a9034b1e) and dev2 (2 GB, id e53efe68-5d3b-438d-84c1-fa4c68bd9582) as available.
7. List the objects in the cinder pool:
rbd ls cinder
volume-1d8f3416-fb15-44a9-837f-7724a9034b1e
volume-e53efe68-5d3b-438d-84c1-fa4c68bd9582

3.8 Configuring Nova on the compute node to use Ceph
1. Upgrade libvirt to 1.1.0; see 《qemu-libvirt更新步骤.doct》.
2. Build qemu 1.6.1; see the same document.
3. Create the nova pool on ceph148:
rados mkpool nova
4. Generate a UUID:
uuidgen
c245e1ef-d340-4d02-9dcf-fd091cd1fe47
5. Define a libvirt secret for that UUID:
cat > secret.xml <<EOF
<secret ephemeral='no' private='no'>
<uuid>c245e1ef-d340-4d02-9dcf-fd091cd1fe47</uuid>
<usage type='ceph'>
<name>client.cinder secret</name>
</usage>
</secret>
EOF
virsh secret-define --file secret.xml
Secret c245e1ef-d340-4d02-9dcf-fd091cd1fe47 created
6. Read the admin key:
cat /etc/ceph/ceph.client.admin.keyring
[client.admin]
key = AQAXrRdU8O7uHRAAvYit51h4Dgiz6jkAtq8GLA==
7. Put the key value into a temporary file:
echo "AQAXrRdU8O7uHRAAvYit51h4Dgiz6jkAtq8GLA==" > key
8. Set the secret value:
virsh secret-set-value --secret c245e1ef-d340-4d02-9dcf-fd091cd1fe47 --base64 $(cat key)
9. Edit the following options in /etc/nova/nova.conf on node 142:
images_type=rbd
images_rbd_pool=nova
images_rbd_ceph_conf=/etc/ceph/ceph.conf
rbd_user=admin
rbd_secret_uuid=c245e1ef-d340-4d02-9dcf-fd091cd1fe47
cpu_mode=none
10. Restart the compute service:
/etc/init.d/openstack-nova-compute restart
11. Create a VM from the dashboard and check it on node 142 with nova list; the instance (id 445e9242-628a-4178-bb10-2d4fd82d042f, name adaaa) shows ACTIVE / Running on intnet=10.10.10.15.
12. List the objects in the nova pool:
rbd ls nova
445e9242-628a-4178-bb10-2d4fd82d042f_disk

4 Operation test screenshots
4.1 Volume snapshot: create a volume snapshot from volume dev3.
4.2 Create a volume from the snapshot.
4.3 Attach the volume created from the snapshot.
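The rbd ls checks above can be bundled into one quick end-to-end verification. The following is a minimal sketch, not part of the original document; it assumes the pool names (glance, cinder, nova) and the client.admin keyring used throughout this section, and the image/volume IDs will of course differ in other deployments:

#!/bin/bash
# Sketch: confirm that Glance images, Cinder volumes and Nova disks land in their Ceph pools.
set -e
for pool in glance cinder nova; do
    echo "== objects in pool ${pool} =="
    rbd ls "${pool}" --id admin
done
# per-pool usage summary
ceph df
# cross-check one Glance image: its RBD image name is the Glance image ID
image_id=$(glance image-list | awk '/cirros/ {print $2; exit}')
rbd info "glance/${image_id}" --id admin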
II. Deploying Ceph with ceph-deploy

1. Lab environment
OS version: Ubuntu 18.04.5 LTS
Kernel: 4.15.0-112-generic
Ceph version: pacific / 16.2.5
Host assignment:
# Deployment server
172.168.32.101/10.0.0.101 ceph-deploy
# Two ceph-mgr management servers
172.168.32.102/10.0.0.102 ceph-mgr01
172.168.32.103/10.0.0.103 ceph-mgr02
# Three servers act as the Ceph cluster Mon monitor servers; each of them can reach the cluster network of the Ceph cluster.
172.168.32.104/10.0.0.104 ceph-mon01 ceph-mds01
172.168.32.105/10.0.0.105 ceph-mon02 ceph-mds02
172.168.32.106/10.0.0.106 ceph-mon03 ceph-mds03
# Four servers act as the Ceph cluster OSD storage servers. Each server has two networks: the public network serves client access and the cluster network is used for cluster management and data replication. Each server has three or more disks.
172.168.32.107/10.0.0.107 ceph-node01
172.168.32.108/10.0.0.108 ceph-node02
172.168.32.109/10.0.0.109 ceph-node03
172.168.32.110/10.0.0.110 ceph-node04
# Disk layout
#/dev/sdb /dev/sdc /dev/sdd /dev/sde  #50G each

2. System initialization
1) Switch all nodes to the Tsinghua mirror:
cat >/etc/apt/sources.list<<EOF
# Source-code mirrors are commented out by default to speed up apt update; uncomment them if needed
deb https:///ubuntu/ bionic main restricted universe multiverse
# deb-src https:///ubuntu/ bionic main restricted universe multiverse
deb https:///ubuntu/ bionic-updates main restricted universe multiverse
# deb-src https:///ubuntu/ bionic-updates main restricted universe multiverse
deb https:///ubuntu/ bionic-backports main restricted universe multiverse
# deb-src https:///ubuntu/ bionic-backports main restricted universe multiverse
deb https:///ubuntu/ bionic-security main restricted universe multiverse
# deb-src https:///ubuntu/ bionic-security main restricted universe multiverse
EOF
2) Install the common packages on all nodes:
apt install iproute2 ntpdate tcpdump telnet traceroute nfs-kernel-server nfs-common lrzsz tree openssl libssl-dev libpcre3 libpcre3-dev zlib1g-dev gcc openssh-server
3) Kernel parameters on all nodes:
cat >/etc/sysctl.conf <<EOF
# Controls source route verification
net.ipv4.conf.default.rp_filter = 1
net.ipv4.ip_nonlocal_bind = 1
net.ipv4.ip_forward = 1
# Do not accept source routing
net.ipv4.conf.default.accept_source_route = 0
# Controls the System Request debugging functionality of the kernel
kernel.sysrq = 0
# Controls whether core dumps will append the PID to the core filename.
# Useful for debugging multi-threaded applications.
kernel.core_uses_pid = 1
# Controls the use of TCP syncookies
net.ipv4.tcp_syncookies = 1
# Disable netfilter on bridges.
net.bridge.bridge-nf-call-ip6tables = 0
net.bridge.bridge-nf-call-iptables = 0
net.bridge.bridge-nf-call-arptables = 0
# Controls the default maximum size of a message queue
kernel.msgmnb = 65536
# Controls the maximum size of a message, in bytes
kernel.msgmax = 65536
# Controls the maximum shared segment size, in bytes
kernel.shmmax = 68719476736
# Controls the maximum number of shared memory segments, in pages
kernel.shmall = 4294967296
# TCP kernel parameters
net.ipv4.tcp_mem = 786432 1048576 1572864
net.ipv4.tcp_rmem = 4096 87380 4194304
net.ipv4.tcp_wmem = 4096 16384 4194304
net.ipv4.tcp_window_scaling = 1
net.ipv4.tcp_sack = 1
# socket buffer
net.core.wmem_default = 8388608
net.core.rmem_default = 8388608
net.core.rmem_max = 16777216
net.core.wmem_max = 16777216
net.core.netdev_max_backlog = 262144
net.core.somaxconn = 20480
net.core.optmem_max = 81920
# TCP connections
net.ipv4.tcp_max_syn_backlog = 262144
net.ipv4.tcp_syn_retries = 3
net.ipv4.tcp_retries1 = 3
net.ipv4.tcp_retries2 = 15
# TCP connection reuse
net.ipv4.tcp_timestamps = 0
net.ipv4.tcp_tw_reuse = 0
net.ipv4.tcp_tw_recycle = 0
net.ipv4.tcp_fin_timeout = 1
net.ipv4.tcp_max_tw_buckets = 20000
net.ipv4.tcp_max_orphans = 3276800
net.ipv4.tcp_synack_retries = 1
net.ipv4.tcp_syncookies = 1
# keepalive
net.ipv4.tcp_keepalive_time = 300
net.ipv4.tcp_keepalive_intvl = 30
net.ipv4.tcp_keepalive_probes = 3
net.ipv4.ip_local_port_range = 10001 65000
# swap
vm.overcommit_memory = 0
vm.swappiness = 10
#net.ipv4.conf.eth1.rp_filter = 0
#net.ipv4.conf.lo.arp_ignore = 1
#net.ipv4.conf.lo.arp_announce = 2
#net.ipv4.conf.all.arp_ignore = 1
#net.ipv4.conf.all.arp_announce = 2
EOF
4) File and process limits on all nodes:
cat > /etc/security/limits.conf <<EOF
root soft core unlimited
root hard core unlimited
root soft nproc 1000000
root hard nproc 1000000
root soft nofile 1000000
root hard nofile 1000000
root soft memlock 32000
root hard memlock 32000
root soft msgqueue 8192000
root hard msgqueue 8192000
* soft core unlimited
* hard core unlimited
* soft nproc 1000000
* hard nproc 1000000
* soft nofile 1000000
* hard nofile 1000000
* soft memlock 32000
* hard memlock 32000
* soft msgqueue 8192000
* hard msgqueue 8192000
EOF
5) Time synchronization on all nodes:
# install and start cron
apt install cron -y
systemctl status cron.service
# sync the time once
/usr/sbin/ntpdate &> /dev/null && hwclock -w
# then resync every 5 minutes
echo "*/5 * * * * /usr/sbin/ntpdate &> /dev/null && hwclock -w" >> /var/spool/cron/crontabs/root
6) /etc/hosts on all nodes:
cat >>/etc/hosts<<EOF
172.168.32.101 ceph-deploy
172.168.32.102 ceph-mgr01
172.168.32.103 ceph-mgr02
172.168.32.104 ceph-mon01 ceph-mds01
172.168.32.105 ceph-mon02 ceph-mds02
172.168.32.106 ceph-mon03 ceph-mds03
172.168.32.107 ceph-node01
172.168.32.108 ceph-node02
172.168.32.109 ceph-node03
172.168.32.110 ceph-node04
EOF
7) Install python2 on all nodes (the Ceph initialization steps need python2.7):
apt install python2.7 -y
ln -sv /usr/bin/python2.7 /usr/bin/python2

3. Ceph deployment
1) Configure the Ceph package repository on all nodes and import its key.
2) Create a ceph user on all nodes and allow it to run privileged commands. It is recommended to deploy and run the Ceph cluster as a dedicated ordinary user; the user only needs to be able to run certain privileged commands non-interactively. Newer versions of ceph-deploy accept any user that can run commands, including root, but an ordinary user such as ceph, cephuser or cephadmin is still recommended for managing the cluster (a sketch of this step follows below).
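The original document ends before showing the commands for step 2. A minimal sketch follows; the user name cephadmin, uid/gid 2088 and the placeholder password are assumptions, not values given in the text:

# Run on every node: create the deployment user and grant it passwordless sudo
groupadd -g 2088 cephadmin                       # gid 2088 is an arbitrary choice
useradd -m -s /bin/bash -u 2088 -g 2088 cephadmin
echo 'cephadmin:ChangeMe_123' | chpasswd          # placeholder password, change it
echo "cephadmin ALL=(ALL) NOPASSWD: ALL" > /etc/sudoers.d/cephadmin
chmod 0440 /etc/sudoers.d/cephadmin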
Ceph single-node installation document

Ceph installation document
[root@server231 ~]# cat /etc/hosts
172.16.7.231 server231

vi ceph.sh
#!/bin/bash
if eval "rpm -q ceph"; then
    echo "ceph is already installed"
else
    echo "ceph is not installed, installing"
fi
yum install ceph -y
rm -rf /var/lib/ceph/mon/*
rm -rf /var/lib/ceph/osd/*
mkdir -p /var/lib/ceph/mon/ceph-server231
export hosts=`ifconfig ens192 | awk NR==2 | grep 'inet' | awk '{print $2}'`
echo $hosts
export a=`uuidgen`
echo $a
# write the configuration file
tee /etc/ceph/ceph.conf << EOF
[global]
fsid = $a
mon initial members = server231
mon host = $hosts
auth cluster required = cephx
auth service required = cephx
auth client required = cephx
osd journal size = 1024
filestore xattr use omap = true
osd pool default size = 2
osd pool default min size = 1
osd pool default pg num = 128
osd pool default pgp num = 128
osd crush chooseleaf type = 1
mon_pg_warn_max_per_osd = 1000    # silences the ceph health warning about PGs per OSD
mon clock drift allowed = 2
mon clock drift warn backoff = 30
[osd.0]
host = server231
#addr = 172.16.7.xxx:6789
osd data = /var/lib/ceph/osd/ceph-0
keyring = /var/lib/ceph/osd/ceph-0/keyring
[osd.1]
host = server231
#addr = 172.16.7.xxx:6789
osd data = /var/lib/ceph/osd/ceph-1
keyring = /var/lib/ceph/osd/ceph-1/keyring
[mds.server231]
host = server231
EOF
# generate the monitor key
ceph-authtool --create-keyring /etc/ceph/ceph.mon.keyring --gen-key -n mon. --cap mon 'allow *'
# create the admin keyring
ceph-authtool --create-keyring /etc/ceph/ceph.client.admin.keyring --gen-key -n client.admin --set-uid=0 --cap mon 'allow *' --cap osd 'allow *' --cap mds 'allow'
# import the admin key into ceph.mon.keyring
ceph-authtool /etc/ceph/ceph.mon.keyring --import-keyring /etc/ceph/ceph.client.admin.keyring
# create the monitor map
monmaptool --create --add server231 $hosts --fsid $a --clobber /etc/ceph/monmap
# initialize the mon
ceph-mon --mkfs -i server231 --monmap /etc/ceph/monmap --keyring /etc/ceph/ceph.mon.keyring
# create the init marker files
touch /var/lib/ceph/mon/ceph-server231/done
touch /var/lib/ceph/mon/ceph-server231/sysvinit
# start the mon
/etc/init.d/ceph start mon.server231
# osd.0: allocate an id
ceph osd create
mkdir -p /var/lib/ceph/osd/ceph-0
#mkdir -p /var/lib/ceph/osd/ceph-1
# initialize the OSD
ceph-osd -i 0 --mkfs --mkkey
# register this OSD
ceph auth add osd.0 osd 'allow *' mon 'allow profile osd' -i /var/lib/ceph/osd/ceph-0/keyring
# add the node to the CRUSH map
ceph osd crush add-bucket server231 host
# place the node under the default root
ceph osd crush move server231 root=default
# assign a weight and inject it into the cluster
ceph osd crush add osd.0 1.0 host=server231
# create the init marker file
touch /var/lib/ceph/osd/ceph-0/sysvinit
# start the osd
/etc/init.d/ceph start osd.0
# osd.1: allocate an id
ceph osd create
#echo $b
mkdir -p /var/lib/ceph/osd/ceph-1
# initialize the OSD
ceph-osd -i 1 --mkfs --mkkey
# register this OSD
ceph auth add osd.1 osd 'allow *' mon 'allow profile osd' -i /var/lib/ceph/osd/ceph-1/keyring
# add the node to the CRUSH map
ceph osd crush add-bucket server231 host
# place the node under the default root
ceph osd crush move server231 root=default
# assign a weight and inject it into the cluster
ceph osd crush add osd.1 1.0 host=server231
# create the init marker file
touch /var/lib/ceph/osd/ceph-1/sysvinit
# start the osd
/etc/init.d/ceph start osd.1
# create the mds directory
mkdir -p /var/lib/ceph/mds/ceph-server231
# register the MDS
ceph auth get-or-create mds.server231 mds 'allow' osd 'allow rwx' mon 'allow profile mds' -o /var/lib/ceph/mds/ceph-server231/keyring
# start the mds and mark it for automatic startup
touch /var/lib/ceph/mds/ceph-server231/sysvinit
/etc/init.d/ceph start mds.server231

2. How to mount the Ceph file system:
mount -t ceph server101,server102:6789:/ /cephtest -o name=admin,secret=AQA/lHFZbI/dIxAA974lXbpOiopB3k0ilHEtAA==
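Passing the key on the command line, as above, leaves it visible in shell history and in mount output. A variation of the same mount using a secret file is sketched below; the file path is an assumption and the monitor addresses simply repeat the example above:

# extract the admin key into a root-only file, then mount CephFS with secretfile=
ceph auth get-key client.admin > /etc/ceph/admin.secret
chmod 600 /etc/ceph/admin.secret
mkdir -p /cephtest
mount -t ceph server101,server102:6789:/ /cephtest -o name=admin,secretfile=/etc/ceph/admin.secret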
Ceph 0.94 Installation Manual

Ceph 0.94.6 installation - 2016
I. Installation environment
Four virtual machines: one deploy node, one mon, two osd; operating system RHEL 7.1.
Ceph-adm  192.168.16.180
Ceph-mon  192.168.16.181
Ceph-osd1 192.168.16.182
Ceph-osd2 192.168.16.183
II. Pre-installation environment
1. Set the hostname and IP address (run separately on every host):
hostnamectl set-hostname <hostname>
Edit /etc/sysconfig/network-scripts/ifcfg-eno* and set IPADDR/NETMASK/GATEWAY.
2. On the adm node, create the node list /etc/ceph/cephlist.txt:
192.168.16.180
192.168.16.181
192.168.16.182
192.168.16.183
3. Edit /etc/hosts on the adm node.
4. Use a script on the adm node to push /etc/hosts to all nodes:
[root@ceph-adm ceph]# cat /etc/ceph/rsync_hosts.sh
WorkDir=/etc/ceph
for ip in $(cat ${WorkDir}/cephlist.txt);do echo -----$ip-----;rsync -avp --delete /etc/hosts $ip:/etc/;done
5. Generate an ssh key on every host and copy the id to all hosts:
ssh-keygen -t rsa
ssh-copy-id root@ceph-adm
ssh-copy-id root@ceph-mon
ssh-copy-id root@ceph-osd1
ssh-copy-id root@ceph-osd2
6. On the adm node:
A. Create /etc/ceph on all nodes:
[root@ceph-adm ceph]# cat mkdir_workdir.sh
WorkDir=/etc/ceph
for ip in $(cat ${WorkDir}/cephlist.txt);do echo -----$ip-----;ssh root@$ip mkdir -p /etc/ceph ;done
B. Disable the firewall and SELinux on all nodes:
[root@ceph-adm ceph]# cat close_firewall.sh
#!/bin/sh
set -x
WorkDir=/etc/ceph
for ip in $(cat ${WorkDir}/cephlist.txt)
do echo -----$ip-----
ssh root@$ip "sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/selinux/config"
ssh root@$ip setenforce 0
ssh root@$ip "firewall-cmd --zone=public --add-port=6789/tcp --permanent"
ssh root@$ip "firewall-cmd --zone=public --add-port=6800-7100/tcp --permanent"
ssh root@$ip "firewall-cmd --reload"
done
C. Apply system tuning on all nodes and raise the open-file limits:
[root@ceph-adm ceph]# cat system_optimization.sh
#!/bin/sh
set -x
WorkDir=/etc/ceph
for ip in $(cat ${WorkDir}/cephlist.txt)
do echo -----$ip-----
ssh root@$ip "sed -i 's/4096/102400/' /etc/security/limits.d/20-nproc.conf"
ssh root@$ip "cat /etc/rc.local | grep 'ulimit -SHn 102400' || echo 'ulimit -SHn 102400' >> /etc/rc.local"
done
D. Create the wty_project.repo and wty_rhel7.repo files and push them to all nodes:
[root@ceph-adm ceph]# cat rsync_repo.sh
WorkDir=/etc/ceph
for ip in $(cat ${WorkDir}/cephlist.txt);do echo -----$ip-----;rsync -avp --delete /etc/ceph/*.repo $ip:/etc/yum.repos.d/;done
E. Install Ceph and the required rpm packages:
[root@ceph-adm ceph]# cat ceph_install.sh
#!/bin/sh
set -x
WorkDir=/etc/ceph
for ip in $(cat ${WorkDir}/cephlist.txt)
do echo -----$ip-----
ssh root@$ip "yum install redhat-lsb -y"
ssh root@$ip "yum install ceph -y"
done
III. ceph-deploy installation, on the adm node
1. Install ceph-deploy:
cd /etc/ceph/
yum install ceph-deploy -y
2. Initialize the cluster:
[root@ceph-adm ceph]# ceph-deploy new ceph-mon
3. Install the cluster Ceph packages (this partly repeats step II.E, but it still installs packages that are needed, e.g. fcgi and ceph-radosgw):
ceph-deploy install ceph-adm ceph-mon ceph-osd1 ceph-osd2
4. Add the initial monitor node and collect the keys:
[root@ceph-adm ceph]# ceph-deploy mon create-initial
5. OSD node steps:
A. Add two 100 GB disks to each of osd1 and osd2.
B. On the adm node:
ceph-deploy disk zap ceph-osd1:sdb ceph-osd1:sdc ceph-osd2:sdb ceph-osd2:sdc
ceph-deploy osd create ceph-osd1:sdb ceph-osd1:sdc ceph-osd2:sdb ceph-osd2:sdc
ceph -s
ceph osd tree
Both commands should report a healthy cluster.
* disk zap zeroes the disks.
* osd create combines osd prepare and osd activate; the mount directory cannot be chosen and ends up as /var/lib/ceph/osd/ceph-X.
6. Make every node an admin node so all of them can run the full set of ceph commands:
ceph-deploy admin ceph-adm ceph-mon ceph-osd1 ceph-osd2
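The helper scripts in section II all follow the same loop over cephlist.txt. A small generic helper in the same style is sketched below; the script name and argument handling are an assumption, not part of the original document:

#!/bin/sh
# run_all.sh -- run one command on every node listed in /etc/ceph/cephlist.txt
# usage: ./run_all.sh "yum install ceph -y"
WorkDir=/etc/ceph
cmd="$*"
for ip in $(cat ${WorkDir}/cephlist.txt); do
    echo -----$ip-----
    ssh root@$ip "$cmd"
done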
CEPH Deployment Document V1.0

Installation and Deployment Document
Contents: 1. Hardware configuration; 2. Deployment plan (normal-performance cluster, high-performance cluster); 3. Operating-system installation (scenario, prerequisites, steps, system configuration); 4. Preparing the installation environment; 5. Installing CEPH - normal-performance cluster (installing Monitor, installing OSD, initialization, adding & removing a Monitor, adding & removing an OSD); 6. Installing CEPH - high-performance cluster (installing Monitor, installing OSD, initialization); 7. Common CEPH commands (checking status, object operations, snapshot operations, setting the replica count)
1. Hardware configuration
Ten servers, each configured as follows: model PowerEdge R730; CPU: 2 x Intel(R) Xeon(R) E5-2630 v3 @ 2.40GHz; memory: 256 GB; disks: two 300 GB SAS disks and six 400 GB SSDs.
Another ten servers, each configured as follows: model PowerEdge R730; CPU: 2 x Intel(R) Xeon(R) E5-2630 v3 @ 2.40GHz; memory: 256 GB; disks: two 300 GB SAS disks, one 400 GB SSD and nine 1 TB SATA disks.
2. Deployment plan
The plan is driven by disk performance: the ten servers with nine 1 TB SATA disks each form one group, and the ten servers with six SSDs each form another. Two separate CEPH clusters are deployed on them, referred to as the normal-performance cluster and the high-performance cluster.
Normal-performance cluster
1) The two 300 GB SAS disks are configured as RAID1 and hold the operating system.
2) The single 400 GB SSD is configured in non-RAID mode and split into nine 40 GB partitions with a GPT label; the partitions are not formatted with a file system and serve as the CEPH journal partitions (a partitioning sketch follows below).
3) The nine 1 TB SATA disks are each configured as a single-disk RAID0 and serve as the CEPH data partitions, GPT-labelled and not formatted.
Each server therefore provides 9 OSDs.
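A minimal sketch of the journal-disk partitioning described in step 2; the device name /dev/sdb is an assumption, not stated in the document:

# carve the 400 GB journal SSD into nine 40 GB GPT partitions, left unformatted
parted -s /dev/sdb mklabel gpt
for i in $(seq 0 8); do
    start=$((i * 40))
    end=$(((i + 1) * 40))
    parted -s /dev/sdb mkpart primary ${start}GB ${end}GB
done
parted -s /dev/sdb print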
1. 云柜 Ceph test environment installation document

Ceph is a new-generation free-software distributed file system created by Sage Weil (co-founder of DreamHost) at the University of California, Santa Cruz, originally designed for his doctoral dissertation.
After graduating in 2007, Sage began working on Ceph full time to make it suitable for production use.
Ceph's main goal is a POSIX-based distributed file system with no single point of failure, in which data is fault tolerant and replicated seamlessly.
In March 2010, Linus Torvalds merged the Ceph client into kernel 2.6.34.
For a detailed introduction to Ceph, see "Ceph: a Linux petabyte-scale distributed file system". This article is a detailed guide to deploying Ceph on CentOS 6.7.
Some websites have reposted this article; reposting is fine, but please link to the original and respect the work, thanks.
First, the deployment environment:
Hostname   IP            Role
tkvm01     10.1.166.75   test node
tkvm02     10.1.166.76   test node
dceph66    10.1.166.66   admin
dceph87    10.1.166.87   mon
dceph88    10.1.166.88   osd
dceph89    10.1.166.89   osd
dceph90    10.1.166.90   osd
The Ceph file system is mounted as a directory under /cephfs on the client cephclient and can be used like any ordinary directory.
1. Pre-installation preparation (as root)
Reference: the Ceph installation preflight documentation.
1.1 Add hosts entries on every machine: edit /etc/hosts (or /etc/sysconfig/network) and add:
10.201.26.121 ceph01
10.201.26.122 ceph02
10.201.26.123 ceph03
1.2 Create a user on every Ceph node:
# adduser ceph
# passwd ceph
The password is uniformly set to: ceph
1.3 Grant the user root privileges on every Ceph node:
# echo "ceph ALL = (root) NOPASSWD:ALL" | sudo tee /etc/sudoers.d/ceph
# chmod 0440 /etc/sudoers.d/ceph
1.4 Stop the firewall and related services:
# service iptables stop
# chkconfig iptables off    // do not start the firewall at boot
On every node, edit /etc/selinux/config and change SELINUX=enforcing to SELINUX=disabled.
2. CEPH deployment setup (as root)
Add the Ceph repository to the ceph-deploy admin node, then install ceph-deploy.
Install the EPEL repository (one machine is enough):
# rpm -Uvh https:///pub/epel/6/x86_64/epel-release-6-8.noarch.rpm
# yum update -y
Install the Ceph dependencies:
# rpm -Uvh /rpm-hammer/el6/noarch/ceph-release-1-0.el6.noarch.rpm
# yum install ceph-deploy -y
Otherwise installing ceph-deploy fails.
3. Mounting the data disks (as root)
On each of ceph01, ceph02 and ceph03 a 20 GB disk is mounted as Ceph data storage for testing (a preparation sketch follows below).
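The document mentions the 20 GB data disk but not how it is prepared. A minimal sketch follows; the device name /dev/sdb and the mount point are assumptions, not taken from the original:

# format the 20 GB test disk and mount it where the OSD data will live
mkfs.xfs -f /dev/sdb
mkdir -p /data/ceph-osd
mount /dev/sdb /data/ceph-osd
echo "/dev/sdb /data/ceph-osd xfs defaults,noatime 0 0" >> /etc/fstab
df -h /data/ceph-osd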
Ceph Installation and Configuration Notes

I. Environment notes
Note: when preparing the system environment, set each node's hostname and disable iptables and SELinux (important).
Required packages:
ceph-0.61.2.tar.tar
libedit0-3.0-1.20090722cvs.el6.x86_64.rpm
libedit-devel-3.0-1.20090722cvs.el6.x86_64.rpm
snappy-1.0.5-1.el6.rf.x86_64.rpm
snappy-devel-1.0.5-1.el6.rf.x86_64.rpm
leveldb-1.7.0-2.el6.x86_64.rpm
leveldb-devel-1.7.0-2.el6.x86_64.rpm
btrfs-progs-0.19.11.tar.bz2
$src is the directory where the packages are stored.
II. Kernel build and configuration:
cp /boot/config-2.6.32-279.el6.x86_64 /usr/src/linux-2.6.34.2/.config
make menuconfig      # select Ceph as a module and enable the btrfs file system
make all             # on a multi-core machine, make -j8 builds the kernel with multiple threads
make modules_install
make install
Edit /etc/grub.conf to make the newly built linux-2.6.34.2 kernel the default boot kernel.
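A quick way to confirm the menuconfig selections took effect before building; CONFIG_CEPH_FS and CONFIG_BTRFS_FS are the usual kernel symbols for the Ceph client and btrfs, given here as a hint rather than quoted from the original document:

# verify that the Ceph client and btrfs were enabled in the kernel config
grep -E '^CONFIG_CEPH_FS=|^CONFIG_BTRFS_FS=' /usr/src/linux-2.6.34.2/.config
# expected for module builds: CONFIG_CEPH_FS=m and CONFIG_BTRFS_FS=m (or =y)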
III. Ceph build and configuration:
Install the dependency packages first:
rpm -ivh libedit0-3.0-1.20090722cvs.el6.x86_64.rpm --force
rpm -ivh libedit-devel-3.0-1.20090722cvs.el6.x86_64.rpm
rpm -ivh snappy-1.0.5-1.el6.rf.x86_64.rpm
rpm -ivh snappy-devel-1.0.5-1.el6.rf.x86_64.rpm
rpm -ivh leveldb-1.7.0-2.el6.x86_64.rpm
rpm -ivh leveldb-devel-1.7.0-2.el6.x86_64.rpm
Build and install Ceph:
./autogen.sh
./configure --without-tcmalloc --without-libatomic-ops
make
make install
Configure Ceph:
cp $src/ceph-0.61.2/src/sample.ceph.conf /usr/local/etc/ceph/ceph.conf
cp $src/ceph-0.61.2/src/init-ceph /etc/init.d/ceph
mkdir /var/log/ceph    # directory for the Ceph logs
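The section stops after copying sample.ceph.conf. A minimal single-node flavour of the edits one would make to it is sketched below, modeled on the server231 example earlier in this collection; the hostname, IP address and paths are placeholders, not values from this section:

cat > /usr/local/etc/ceph/ceph.conf <<EOF
[global]
fsid = $(uuidgen)
mon initial members = node1          # placeholder hostname
mon host = 192.168.1.10              # placeholder monitor IP
auth cluster required = cephx
auth service required = cephx
auth client required = cephx
osd pool default size = 2
[mon.node1]
host = node1
mon addr = 192.168.1.10:6789
[osd.0]
host = node1
osd data = /var/lib/ceph/osd/ceph-0
EOF
/etc/init.d/ceph -a start    # then check the cluster with: ceph -s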
Ceph Installation and Deployment

Introduction to Ceph
Whether you want to provide Ceph object storage and/or Ceph block devices to a cloud platform, deploy a Ceph file system, or put Ceph to some other use, every Ceph storage cluster deployment starts with setting up the individual Ceph nodes, the network, and the Ceph storage cluster itself.
A Ceph storage cluster requires at least one Ceph Monitor and two OSD daemons.
When running Ceph file system clients, a Metadata Server (MDS) is also required.
Ceph OSDs: a Ceph OSD daemon stores data and handles data replication, recovery, backfill and rebalancing, and it provides monitoring information to the Ceph Monitors by checking the heartbeats of other OSD daemons.
When the storage cluster is set to keep 2 replicas, at least 2 OSD daemons are needed for the cluster to reach the active+clean state (Ceph defaults to 3 replicas, but the replica count can be adjusted).
Monitors: a Ceph Monitor maintains the maps that describe the cluster state, including the monitor map, the OSD map, the placement group (PG) map and the CRUSH map.
Ceph keeps a history (called epochs) of every state change on the Monitors, OSDs and PGs.
MDSs: a Ceph Metadata Server (MDS) stores metadata for the Ceph file system (that is, Ceph block devices and Ceph object storage do not use the MDS).
The MDS makes it possible for POSIX file system users to run basic commands such as ls and find without putting load on the Ceph storage cluster.

Ceph component: osd
The OSD daemon (at least two of them) stores data and handles replication, recovery, rollback and rebalancing, and it reports part of the monitoring information to the monitor through the heartbeat mechanism. A Ceph cluster needs at least two OSD daemons.
Ceph component: mon
Maintains the cluster state maps, covering the monitors, OSDs and placement groups (PGs), and also keeps the history of state changes of the monitors, OSDs and PGs.
Ceph component: mgr (new)
Responsible for cluster management tasks such as the PG map, exposes cluster performance metrics (for example the IO figures shown by ceph -s), and provides a web-based monitoring dashboard.

Ceph logical structure
Data is written as Ceph objects into PGs, and PGs are stored on OSD daemons; an OSD maps to a disk. An object belongs to exactly one PG. A RAID set, a whole disk or a single partition can each back one OSD.
monitor: an odd number of them. osd: from a few dozen up to tens of thousands; more OSDs generally means better performance.
PG concepts: replica count; CRUSH rules (how a PG finds its OSD acting set); users and permissions; epoch: a monotonically increasing version number; acting set: the OSD list for a PG, the first being the primary OSD and the rest the replica OSDs; up set: a past version of the acting set; pg_temp: a temporary PG mapping.
OSD states: by default an OSD reports to the mon every 2 seconds (it also watches the other OSDs in its group; if an OSD has not reported to the mon for 300 seconds it is kicked out of the PG group): up - can serve IO; down - has failed; in - holds data; out - holds no data.

Ceph use cases
iSCSI mounts via tgt; internal company file sharing; huge numbers of files, high traffic and high concurrency; workloads that need a highly available, high-performance file system; cases where a traditional single server or NAS share cannot keep up, for example in capacity or availability.

Production recommendations
Use an all-10GbE network for the storage cluster; separate the (internal) cluster network from the public network on different NICs; deploy mon and mds on machines separate from the OSDs; use PCIe SSDs for the journal (enterprise-grade ones reach 400k+ IOPS), while SATA is fine for the OSDs; size the cluster by capacity; use Xeon E5-2620 v3 or better CPUs with 64 GB or more memory; finally, spread the cluster hosts across racks to survive rack-level failures (power, network).

Installation environment
Because only a few machines are available, three hosts act as both mon and osd. This is not recommended in production, where at least 3 dedicated mon hosts should be used.
ceph-0  eth0:192.168.0.150(Public)  eth1:172.16.1.100(Cluster)  mon, osd, mgr  DISK 0 15G(OS)  DISK 1 10G(Journal)  DISK 2 10G(OSD)  DISK 3 10G(OSD)
ceph-1  eth0:192.168.0.151(Public)  eth1:172.16.1.101(Cluster)  mon, osd, mgr  DISK 0 15G(OS)  DISK 1 10G(Journal)  DISK 2 10G(OSD)  DISK 3 10G(OSD)
ceph-2  eth0:192.168.0.152(Public)  eth1:172.16.1.102(Cluster)  mon, osd, mgr  DISK 0 15G(OS)  DISK 1 10G(Journal)  DISK 2 10G(OSD)  DISK 3 10G(OSD)

I. System setup
1. Bind the hostnames
The later installation and configuration steps all refer to hostnames, so bind them first. Apply the following hosts entries on each of the three nodes:
[root@ceph-node0 ~]# cat /etc/hosts
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.0.150 ceph-node0
192.168.0.151 ceph-node1
192.168.0.152 ceph-node2
2. Set up ssh-keygen trust between the nodes.
3. Stop the firewall on every node: systemctl stop firewalld
4. Time synchronization:
yum install -y ntpdate
//ntpdate
5. Install the EPEL repository and ceph-deploy (run on every node):
Install the EPEL repository:
wget -O /etc/yum.repos.d/epel.repo /repo/epel-7.repo
Install the Ceph release package:
rpm -ivh https:///ceph/rpm-luminous/el7/noarch/ceph-release-1-1.el7.noarch.rpm
Point ceph.repo at the mirror:
sed -i 's#htt.*://#https:///ceph#g' /etc/yum.repos.d/ceph.repo
or simply replace /etc/yum.repos.d/ceph.repo with the following:
[Ceph]
name=Ceph packages for $basearch
baseurl=https:///ceph/rpm-luminous/el7/$basearch
enabled=1
gpgcheck=1
type=rpm-md
gpgkey=https:///ceph/keys/release.asc
[Ceph-noarch]
name=Ceph noarch packages
baseurl=https:///ceph/rpm-luminous/el7/noarch
enabled=1
gpgcheck=1
type=rpm-md
gpgkey=https:///ceph/keys/release.asc
[ceph-source]
name=Ceph source packages
baseurl=https:///ceph/rpm-luminous/el7/SRPMS
enabled=1
gpgcheck=1
type=rpm-md
gpgkey=https:///ceph/keys/release.asc
6. Install ceph-deploy with yum:
[root@ceph-node0 ~]# yum install -y ceph-deploy
Create a ceph-install directory and work from it; all files produced during installation land there:
[root@ceph-node0 ~]# mkdir ceph-install && cd ceph-install
[root@ceph-node0 ceph-install]#

II. Preparing the disks
1. Journal disk (run on every node)
Partition the journal disk on each node into sdb1 and sdb2, one per local OSD; the journal partition should be about 25% of the OSD size. Use parted:
[root@ceph-node0 ~]# parted /dev/sdb
mklabel gpt
mkpart primary xfs 0% 50%
mkpart primary xfs 50% 100%
q
2. OSD disks
The OSD disks are left untouched; ceph-deploy will handle them.

III. Installing Ceph
1. Install Ceph with ceph-deploy; the following steps run only on the ceph-deploy admin node.
Create a Ceph cluster, i.e. the Mons; all three hosts act as mon:
[root@ceph-node0 ceph-install]# ceph-deploy new ceph-node0 ceph-node1 ceph-node2
Install Ceph on all nodes:
[root@ceph-node0 ceph-install]# ceph-deploy install ceph-node0 ceph-node1 ceph-node2
or run yum install -y ceph manually on every node.
Create and initialize the monitor nodes and collect all the keys:
[root@ceph-node0 ceph-install]# ceph-deploy mon create-initial
At this point the mon port is visible on the OSD nodes.
Create the OSD storage nodes:
[root@ceph-node0 ceph-install]# ceph-deploy osd create ceph-node0 --data /dev/sdc --journal /dev/sdb1
[root@ceph-node0 ceph-install]# ceph-deploy osd create ceph-node0 --data /dev/sdd --journal /dev/sdb2
[root@ceph-node0 ceph-install]# ceph-deploy osd create ceph-node1 --data /dev/sdc --journal /dev/sdb1
[root@ceph-node0 ceph-install]# ceph-deploy osd create ceph-node1 --data /dev/sdd --journal /dev/sdb2
[root@ceph-node0 ceph-install]# ceph-deploy osd create ceph-node2 --data /dev/sdc --journal /dev/sdb1
[root@ceph-node0 ceph-install]# ceph-deploy osd create ceph-node2 --data /dev/sdd --journal /dev/sdb2
Push the configuration file and the admin keyring to the admin node and the Ceph nodes:
[root@ceph-0 ceph-install]# ceph-deploy --overwrite-conf admin ceph-node0 ceph-node1 ceph-node2
Check the cluster state with ceph -s:
[root@ceph-node0 ceph-install]# ceph -s
cluster:
id: e103fb71-c0a9-488e-ba42-98746a55778a
health: HEALTH_WARN
no active mgr
services:
mon: 3 daemons, quorum ceph-node0,ceph-node1,ceph-node2
mgr: no daemons active
osd: 6 osds: 6 up, 6 in
data:
pools: 0 pools, 0 pgs
objects: 0 objects, 0B
usage: 0B used, 0B / 0B avail
pgs:
When the cluster is healthy it reports health HEALTH_OK.
If not all OSDs are up, restart the relevant daemon on its node; the value after @ is the OSD ID:
systemctl start ceph-osd@0
2. Deploy the mgr
The luminous release needs a running mgr, otherwise ceph -s keeps showing "no active mgr". The official documentation recommends one mgr per monitor:
[root@ceph-node0 ceph-install]# ceph-deploy mgr create ceph-node0:ceph-node0 ceph-node1:ceph-node1 ceph-node2:ceph-node2
Check the status again:
[root@ceph-node0 ceph-install]# ceph -s
cluster:
id: e103fb71-c0a9-488e-ba42-98746a55778a
health: HEALTH_OK
services:
mon: 3 daemons, quorum ceph-node0,ceph-node1,ceph-node2
mgr: ceph-node0(active), standbys: ceph-node1, ceph-node2
osd: 6 osds: 6 up, 6 in
data:
pools: 0 pools, 0 pgs
objects: 0 objects, 0B
usage: 6.02GiB used, 54.0GiB / 60.0GiB avail
pgs:
3. Cleaning up
If the installation runs into strange errors, the following wipes everything so you can start over:
[root@ceph-node0 ceph-install]# ceph-deploy purge ceph-node0 ceph-node1 ceph-node2
[root@ceph-node0 ceph-install]# ceph-deploy purgedata ceph-node0 ceph-node1 ceph-node2
[root@ceph-node0 ceph-install]# ceph-deploy forgetkeys

IV. Configuration
1. Why separate the networks
Performance: the OSDs handle data replication on behalf of the clients, and with multiple replicas the inter-OSD traffic inevitably competes with client-to-cluster traffic, adding latency and causing performance problems; recovery and rebalancing also add significant latency on the public network (a ceph.conf sketch follows below).
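A minimal sketch of what that separation looks like in ceph.conf for the addressing used above (192.168.0.0/24 public, 172.16.1.0/24 cluster); the subnets come from the environment table, but treat the snippet as illustrative rather than as the document's own configuration:

# run in the ceph-install directory that holds the cluster's ceph.conf;
# the file generated by "ceph-deploy new" only has a [global] section, so appended keys land there
cat >> ceph.conf <<EOF
public network = 192.168.0.0/24
cluster network = 172.16.1.0/24
EOF
# push the updated configuration to every node, then restart the Ceph daemons on each node
ceph-deploy --overwrite-conf config push ceph-node0 ceph-node1 ceph-node2
# e.g. on each node: systemctl restart ceph.target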
Ceph Installation and Deployment Document
Contents: I: Introduction; II: Deployment environment; III: Cluster preparation (3.1 generate SSH keys and connect the nodes; 3.2 build the IP address list and edit the hosts file; 3.3 network port settings; 3.4 install the CentOS yum repository packages on all nodes; 3.5 add a time-synchronization cron job; 3.6 install yum-plugin-priorities); IV: Install the Ceph packages (4.1 install the deployment host using ceph-deploy; 4.2 install the Ceph storage cluster, including the Ceph object gateway); V: Build the Ceph cluster (5.1 create a new cluster; 5.2 add mon nodes; 5.3 add osd nodes); VI: Errors seen during installation and their fixes; VII: Common commands

I: Introduction
The Ceph ecosystem has four parts:
1. Clients: the data users.
2. mds: the metadata server cluster, which caches and synchronizes the distributed metadata (no mds is installed in this document).
3. osd: the object storage cluster, which stores data and metadata as objects and performs other key functions.
4. mon: the cluster monitors, which perform the monitoring functions.

II: Deployment environment

III: Cluster preparation
3.1: Generate SSH keys and connect the nodes
1) Set the hostname on every node: vim /etc/sysconfig/network
2) Install SSH (on the primary node): sudo apt-get install openssh-server
3) Generate the SSH login key (on the primary node): ssh-keygen
Copy the key to the other servers: ssh-copy-id {user}@{node IP}, for example: ssh-copy-id root@anode2
4) Create and edit ~/.ssh/config, adding the other hosts:
Host {Hostname}
Hostname {IP}
User {Username}
Example:
Host anode1
Hostname 172.16.100.35
User root
Host anode2
Hostname 172.16.100.36
User root
Host anode3
Hostname 172.16.100.37
User root
3.2: Build the IP address list and edit the hosts file
1) Create a working directory and the IP list, to prepare for file transfers. On the primary node:
mkdir /workspace/
cd /workspace/
vim cephlist.txt
and write the host list into it:
anode1
anode2
anode3
2) Edit the hosts file: vim /etc/hosts and append:
172.16.100.35 anode1
172.16.100.36 anode2
172.16.100.37 anode3
Then push the hosts file to the other hosts:
for ip in $(cat /workspace/cephlist.txt);do echo -----$ip-----;rsync -avp /etc/hosts $ip:/etc/;done
(A combined key-and-hosts push loop is sketched below.)
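Once cephlist.txt exists, the per-node ssh-copy-id and hosts push from sections 3.1 and 3.2 can be driven from a single loop. A minimal sketch, assuming the root account and the anode1-anode3 list above:

# push the SSH public key and the updated /etc/hosts to every node in the list
for node in $(cat /workspace/cephlist.txt); do
    echo -----$node-----
    ssh-copy-id root@$node
    rsync -avp /etc/hosts $node:/etc/
done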
3.3: Network port settings
Check the network settings and make sure they are persistent so they do not change after a reboot.
(1) Network settings, on all nodes:
vim /etc/sysconfig/network-scripts/ifcfg-{iface}
Confirm that ONBOOT is YES, that BOOTPROTO is usually NONE for a static IP address, and that the IPV6{opt} options are set to YES if IPv6 is to be used.
(2) Firewall (iptables) settings, on all nodes:
a) Port 6789: the Monitor communicates with the OSDs through this port, so it must be open on every Monitor node.
b) Ports 6800:7300: used for OSD traffic. Every OSD on a Ceph node needs three ports: one to talk to clients and Monitors, one to send data to other OSDs, and one for heartbeats. If a Ceph node runs 4 OSDs, open 12 (= 3 x 4) ports.
sudo iptables -I INPUT 1 -i eth0 -p tcp -s 172.16.100.35/255.255.255.0 --dport 6789 -j ACCEPT
sudo iptables -I INPUT 1 -i eth0 -p tcp -s 172.16.100.35/255.255.255.0 --dport 6800:6809 -j ACCEPT
After configuring iptables, make the change permanent on every node so it survives a reboot:
/sbin/service iptables save
(3) tty settings, on all nodes:
sudo visudo
Find the line "Defaults requiretty" (around line 50) and change it to "Defaults:{User} !requiretty", or simply comment the original line out. This keeps ceph-deploy from failing.
(4) SELINUX, on all nodes:
sudo setenforce 0
This keeps the cluster from erroring out before the configuration is finished. The change can be made permanent in /etc/selinux/config.
3.4: Install the CentOS yum repository packages (on all nodes)
(1) Copy the .repo files shipped alongside this document into /etc/yum.repos.d/.
(2) Push the yum repository files to the other nodes (--delete removes files in DST that are absent from SRC):
for ip in $(cat /workspace/cephlist.txt);do echo -----$ip-----;rsync -avp --delete /etc/yum.repos.d $ip:/etc/;done
(3) Refresh yum on all nodes:
yum makecache
3.5: Add the time-synchronization cron job
(1) Install the NTP package on all nodes:
yum install ntp
Afterwards, configure the NTP service to start at boot on every node:
chkconfig ntpd on
chkconfig --list ntpd
ntpd 0:off 1:off 2:on 3:on 4:on 5:on 6:off
Before configuring ntpd, synchronize the time once manually with ntpdate, so that the offset from the external time server is not large enough to keep ntpd from synchronizing.
# ntpdate -u
(2) Configure the internal NTP server, NTP-Server (172.16.100.35). The core of the NTPD service is the /etc/ntp.conf file; only the marked entries differ from the defaults:
# For more information about this file, see the man pages
# ntp.conf(5), ntp_acc(5), ntp_auth(5), ntp_clock(5), ntp_misc(5), ntp_mon(5).
driftfile /var/lib/ntp/drift
# Permit time synchronization with our time source, but do not
# permit the source to query or modify the service on this system.
restrict default kod nomodify notrap nopeer noquery
restrict -6 default kod nomodify notrap nopeer noquery
# Permit all access over the loopback interface. This could
# be tightened as well, but to do so would effect some of
# the administrative functions.
restrict 127.0.0.1
restrict -6 ::1
# Hosts on local network are less restricted.
# Allow other machines on the internal network to synchronize from this server
restrict 172.16.100.0 mask 255.255.255.0 nomodify notrap
# Use public servers from the project.
# Please consider joining the pool.
# Commonly used upstream time servers in China
server prefer    # China national time service centre
server
server
#broadcast 192.168.1.255 autokey # broadcast server
#broadcastclient # broadcast client
#broadcast 224.0.1.1 autokey # multicast server
#multicastclient 224.0.1.1 # multicast client
#manycastserver 239.255.254.254 # manycast server
#manycastclient 239.255.254.254 autokey # manycast client
# allow update time by the upper server
# Allow the upstream time server to adjust the local time
restrict nomodify notrap noquery
restrict nomodify notrap noquery
restrict nomodify notrap noquery
# Undisciplined Local Clock. This is a fake driver intended for backup
# and when no outside source of synchronized time is available.
# Fall back to the local clock when no external time source is reachable
server 127.127.1.0 # local clock
fudge 127.127.1.0 stratum 10
# Enable public key cryptography.
#crypto
includefile /etc/ntp/crypto/pw
# Key file containing the keys and key identifiers used when operating
# with symmetric key cryptography.
keys /etc/ntp/keys
# Specify the key identifiers which are trusted.
#trustedkey 4 8 42
# Specify the key identifier to use with the ntpdc utility.
#requestkey 8
# Specify the key identifier to use with the ntpq utility.
#controlkey 8
# Enable writing of statistics records.
#statistics clockstats cryptostats loopstats peerstats
Make the change take effect and enable the services:
chkconfig ntpd on
chkconfig ntpdate on
(3) Configure the other nodes to synchronize from the local time server:
yum install ntp
...
chkconfig ntpd on
vim /etc/ntp.conf    (replace the whole file with the client configuration)
Synchronize the local server time manually once with ntpdate:
ntpdate -u 192.168.0.135
22 Dec 17:09:57 ntpdate[6439]: adjust time server 172.16.100.35 offset 0.004882 sec
Synchronization may fail at this point; the usual cause is that the local NTPD server has not started up properly yet, and it generally takes a few minutes before synchronization begins (a quick check is sketched below).
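A quick way to confirm that a client node has started tracking the internal server; this is a sketch rather than part of the original document, and the exact output will vary:

# on any client node, after ntpd has been running for a few minutes
service ntpd start
ntpq -p      # the line for 172.16.100.35 should eventually show a '*' marking the selected peer
ntpstat      # reports "synchronised to NTP server" once synchronization completes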