Disable requiretty
On some distributions (such as CentOS), ceph-deploy commands will fail if requiretty is enabled by default on your Ceph nodes. Disable it as follows:
Run sudo visudo, find the Defaults requiretty option, and change it to Defaults:ceph !requiretty so that ceph-deploy can log in as the ceph user and use sudo.
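For reference, the relevant sudoers lines before and after the edit (a sketch of the change described above):

# before:
Defaults requiretty
# after: exempt only the ceph user from the tty requirement
Defaults:ceph !requiretty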
Passwordless SSH login
The admin node must be able to reach each Ceph node over SSH without a password. A key pair already exists on the admin node (it was generated when the VM instance was created), so there is no need to run ssh-keygen. Simply copy the public key to the other compute nodes:
ssh-copy-id inspur@Computer01
ssh-copy-id inspur@Computer02
ssh-copy-id inspur@Computer03
vim ~/.ssh/config
Add the following content:
Host Computer01
    Hostname Computer01
    User inspur
Host Computer02
    Hostname Computer02
    User inspur
Host Computer03
    Hostname Computer03
    User inspur
Then restrict the file's permissions:
chmod 600 ~/.ssh/config
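A quick way to confirm that passwordless login works from the admin node (a simple check, using the hosts defined above; the output is illustrative):

$ ssh Computer01 hostname
Computer01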
Disable the firewall
# systemctl stop firewalld.service
# systemctl disable firewalld.service
Disable SELinux
Check the current SELinux status:
sestatus -v
If SELinux is not disabled, you must permanently disable it on all nodes.
Edit /etc/selinux/config, find the SELINUX line, and change it to SELINUX=disabled.
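One way to apply this (a sketch: setenforce 0 switches the running system to permissive immediately, while the config edit makes the change permanent after a reboot):

# setenforce 0
# sed -i 's/^SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config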
Install ceph-deploy via pip
Attempting the installation with yum first:
sudo yum update && sudo yum install ceph-deploy
Package conflict
--> Processing Dependency: python-distribute for package: ceph-deploy-1.5.34-0.noarch
Package python-setuptools-0.9.8-4.el7.noarch is obsoleted by python2-setuptools-22.0.5-1.el7.noarch which is already installed
--> Finished Dependency Resolution
Error: Package: ceph-deploy-1.5.34-0.noarch (ceph-noarch)
       Requires: python-distribute
       Available: python-setuptools-0.9.8-4.el7.noarch (base)
           python-distribute = 0.9.8-4.el7
You could try using --skip-broken to work around the problem
You could try running: rpm -Va --nofiles --nodigest
The conflicting package is the setuptools already installed on the system:
$ rpm -qa | grep setuptools
python2-setuptools-22.0.5-1.el7.noarch
Uninstall it, then resolve the installation by using pip instead:
# yum install python-pip
# pip install ceph-deploy
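A quick sanity check after the pip install (the version shown is illustrative):

# ceph-deploy --version
1.5.34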
Ceph Storage Cluster quick installation
Create the cluster
On the admin node, change into the directory created earlier for the configuration files and run ceph-deploy as follows:
ceph-deploy new Controller
The current directory will now contain a Ceph configuration file, a monitor keyring, and a log file:
ceph.conf ceph-deploy-ceph.log ceph.mon.keyring
Edit the configuration:
vim ceph.conf
Change the default number of replicas in the Ceph configuration file from 3 to 2:
osd pool default size = 2
If you have multiple network interfaces, add the public network setting under the [global] section of the Ceph configuration file. This deployment has only one NIC, so the following was not configured:
public network = {ip-address}/{netmask}
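For reference, after these edits the [global] section of ceph.conf might look roughly like this (a sketch: the fsid, mon_initial_members, and mon_host values are generated by ceph-deploy, and the placeholders shown here are not values to copy):

[global]
fsid = <generated-uuid>
mon_initial_members = Controller
mon_host = <controller-ip>
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx
osd pool default size = 2
# public network = {ip-address}/{netmask}    # only needed with multiple NICs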
Install Ceph
ceph-deploy install Controller Controller01 Controller02 Controller03
Possible error:
[ceph_deploy][ERROR ] RuntimeError: NoSectionError: No section: 'ceph'
Fix by removing the leftover repo package and its saved repo file:
yum remove ceph-release
rm /etc/yum.repos.d/ceph.repo.rpmsave
Configure the initial monitor(s)
ceph-deploy mon create-initial
After this completes, the current directory will contain these keyrings: ceph.bootstrap-mds.keyring, ceph.bootstrap-osd.keyring, ceph.bootstrap-rgw.keyring, and ceph.client.admin.keyring.
Add OSDs
Add three OSDs:
ssh Controller01
sudo mkdir /var/local/osd0
sudo chown -R ceph:ceph /var/local/osd0/
exit

ssh Controller02
sudo mkdir /var/local/osd1
sudo chown -R ceph:ceph /var/local/osd1/
exit

ssh Controller03
sudo mkdir /var/local/osd2
sudo chown -R ceph:ceph /var/local/osd2/
exit
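The three per-node blocks above can also be collapsed into a single loop run from the admin node (a sketch, assuming the node/directory pairing used above and that the inspur user can run sudo without a password prompt, as ceph-deploy itself requires):

i=0
for node in Controller01 Controller02 Controller03; do
    # create the OSD data directory on each node and hand it to the ceph user
    ssh $node "sudo mkdir -p /var/local/osd$i && sudo chown -R ceph:ceph /var/local/osd$i/"
    i=$((i+1))
done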
From the admin node, run ceph-deploy to prepare the OSDs:
ceph-deploy osd prepare Controller01:/var/local/osd0 Controller02:/var/local/osd1 Controller03:/var/local/osd2
Problem: if SELinux has not been disabled on all nodes, this command can hang for a very long time:
[Controller01][WARNIN] command: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring osd create --concise f18c6ce8-3b03-4ab2-876b-aa70d53b45f3
[Controller01][WARNIN] No data was received after 300 seconds, disconnecting...
Fix: permanently disable SELinux on all nodes.
vim /etc/sysconfig/selinux
Make the following changes:
#SELINUX=enforcing       # comment out
#SELINUXTYPE=targeted    # comment out
SELINUX=disabled         # add this line
Save and quit (:wq), then reboot:
shutdown -r now
Activate the OSDs
ceph-deploy osd activate Controller01:/var/local/osd0 Controller02:/var/local/osd1 Controller03:/var/local/osd2
Use ceph-deploy to copy the configuration file and admin key to the admin node and the Ceph nodes, so that you will not need to specify the monitor address and ceph.client.admin.keyring each time you run a Ceph command:
ceph-deploy admin Controller Controller01 Controller02 Controller03
Make sure ceph.client.admin.keyring has the correct permissions:
sudo chmod +r /etc/ceph/ceph.client.admin.keyring
Check the cluster's health:
ceph health
A common cause of failure here is the firewall: once the firewall was disabled and the cluster restarted, the Ceph cluster returned to a healthy state.
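When everything is up, the health check returns HEALTH_OK (illustrative output; ceph -s gives a fuller summary):

$ ceph health
HEALTH_OK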
Block Storage Service
Installation and configuration on the management node
Basic configuration
$ mysql -u root -p
CREATE DATABASE cinder;
GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'localhost' IDENTIFIED BY 'CINDER_DBPASS';
GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'%' IDENTIFIED BY 'CINDER_DBPASS';
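To confirm the grants took effect (an optional check, still at the mysql prompt):

mysql> SHOW GRANTS FOR 'cinder'@'localhost';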
$ . admin-openrc
$ openstack user create --domain default --password-prompt cinder
User Password: (123456)
Repeat User Password:
$ openstack role add --project service --user cinder admin
$ openstack service create --name cinder \
  --description "OpenStack Block Storage" volume
$ openstack service create --name cinderv2 \
  --description "OpenStack Block Storage" volumev2
Create the API endpoints for the cinder service:
$ openstack endpoint create --region RegionOne \
  volume public http://Controller:8776/v1/%\(tenant_id\)s
$ openstack endpoint create --region RegionOne \
  volume internal http://Controller:8776/v1/%\(tenant_id\)s
$ openstack endpoint create --region RegionOne \
  volume admin http://Controller:8776/v1/%\(tenant_id\)s
$ openstack endpoint create --region RegionOne \
  volumev2 public http://Controller:8776/v2/%\(tenant_id\)s
$ openstack endpoint create --region RegionOne \
  volumev2 internal http://Controller:8776/v2/%\(tenant_id\)s
$ openstack endpoint create --region RegionOne \
  volumev2 admin http://Controller:8776/v2/%\(tenant_id\)s
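To verify that all six endpoints were registered (an optional, illustrative check):

$ openstack endpoint list | grep cinder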
Install and configure the components
# yum install openstack-cinder