
Kolla OpenStack Ceph Hyper-Converged Deployment


Video: http://b23.tv/lGqU9OJ

Environment Preparation

Hostname      IP                                     Memory  CPU
hci-cloud01   10.10.10.60,20.20.20.60,30.30.30.60    32G     12 cores
hci-cloud02   10.10.10.61,20.20.20.61,30.30.30.61    32G     12 cores
hci-cloud03   10.10.10.62,20.20.20.62,30.30.30.62    32G     12 cores
Each node needs at least three raw disks for Ceph.
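The raw-disk requirement above can be sanity-checked per node before deploying. This is only a sketch: `count_raw_disks` is a hypothetical helper that parses `lsblk -dn -o NAME,TYPE` output.

```shell
# Hypothetical helper: count whole disks (TYPE == "disk") in the output of
# `lsblk -dn -o NAME,TYPE`. The system disk is included in the count, so a
# hyper-converged node from the table above should report at least 4.
count_raw_disks() {
  awk '$2 == "disk" { n++ } END { print n + 0 }'
}

# On a real node: lsblk -dn -o NAME,TYPE | count_raw_disks
```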

Environment Requirements

Link: https://pan.baidu.com/s/1IDy5WLq1JHhVoMXPU55jjg?pwd=6666
Extraction code: 6666

Disable the firewall and SELinux

sed -i "s/SELINUX=enforcing/SELINUX=disabled/g" /etc/selinux/config; systemctl stop firewalld; systemctl disable firewalld; setenforce 0
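The `sed` pattern above only matches `SELINUX=enforcing`. A slightly more tolerant sketch rewrites whatever the current value is (enforcing or permissive); `disable_selinux` is a hypothetical helper, and the config path is a parameter so the logic can also be exercised against a copy of the file.

```shell
# Hypothetical helper: force SELINUX=disabled regardless of the current value.
disable_selinux() {
  conf=${1:-/etc/selinux/config}
  sed -i 's/^SELINUX=.*/SELINUX=disabled/' "$conf"
}

# On a real host: disable_selinux && setenforce 0
```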

Initialization

ops-director-kolla-new.tar.gz

tar-tool.rpm.xz
xz -d tar-tool.rpm.xz

rpm -ivh tar-tool.rpm
tar xvf ops-director-kolla-new.tar.gz
ssh-keygen
Just press Enter at every prompt

for i in 0 1 2;do ssh-copy-id -i /root/.ssh/id_rsa.pub [email protected]$i;done
Type yes, then enter the host password when prompted
cd ops-director-kolla



vi director-inventory.ini
1 [all:vars]
3 ansible_ssh_pass=000000

[deployment]
6 10.10.10.60

[control]
9 10.10.10.60 hostname=hci-cloud01
10 10.10.10.61 hostname=hci-cloud02
11 10.10.10.62 hostname=hci-cloud03

[compute]
14 10.10.10.60
15 10.10.10.61
16 10.10.10.62
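To double-check which hosts each inventory section resolves to, the file can be parsed with a small awk helper. This is a sketch: `inventory_hosts` is a hypothetical name, and it assumes the exact ini layout shown above (one IP as the first field per host line).

```shell
# Hypothetical helper: print the host IPs listed under one ini section.
inventory_hosts() {
  # $1: section name (e.g. control), stdin: inventory text
  awk -v sec="[$1]" '
    $0 == sec   { in_sec = 1; next }   # enter the wanted section
    /^\[/       { in_sec = 0 }         # any other section header ends it
    in_sec && NF { print $1 }          # first field is the IP
  '
}

# Example: inventory_hosts control < director-inventory.ini
```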



vi start.sh
# NIC used for tenant (instance) networks
7 Openstack_Tenant_NIC="eth2"



# Run the script
source start.sh
exit

Then reconnect (ssh [email protected]/61/62)

Ceph Deployment

Configure the Ceph cluster inventory

cd ops-director-kolla/ceph-director/

vim inventory/ceph-cluster.ini
2 [all:vars]
4 ansible_ssh_pass=000000
 7 [mons]
 8 10.10.10.60
 9 10.10.10.61
10 10.10.10.62
11 
12 [osds]
13 10.10.10.60
14 10.10.10.61
15 10.10.10.62


PS: If the disk names differ between hosts, specify the devices per host as below; if they are identical on every host, this step is not needed
12 [osds]
13 10.10.10.60 devices="['/dev/sdb','/dev/sdc','/dev/sdd']"
14 10.10.10.61 devices="['/dev/sde','/dev/sdf','/dev/sdg']"
15 10.10.10.62 devices="['/dev/sdj','/dev/sdk','/dev/sdl']"

Configure Ceph global settings

cd ops-director-kolla/ceph-director/

vim group_vars/all.yml
158 yum_repo_ip: 10.10.10.60
329 monitor_interface: eth0
370 public_network: "10.10.10.0/24"
371 cluster_network: "20.20.20.0/24"
408 radosgw_interface: eth0
577 ceph_docker_registry: 10.10.10.60:8086
848 insecure_registries: "10.10.10.60:8086"

OSD configuration

vim group_vars/osds.yml
36 devices:
37   - /dev/sdb
38   - /dev/sdc
39   - /dev/sdd
40   - /dev/sde

PS: Write these according to your own environment. The devices must be raw (unpartitioned) disks, and the disk names must be identical on every node in the cluster (check with: lsblk)
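A hedged pre-flight check for the devices list: `devices_are_raw` is a hypothetical helper that reads `lsblk -ln -o NAME,TYPE` style output ("name type" per line) and confirms each named device is a whole disk with no partitions.

```shell
# Hypothetical helper: verify every listed device is an unpartitioned disk.
devices_are_raw() {
  listing=$(cat)   # capture the lsblk listing once
  for dev in $1; do
    # the device itself must be listed as a whole disk
    echo "$listing" | awk -v d="$dev" '$1 == d && $2 == "disk" { ok = 1 } END { exit !ok }' || return 1
    # and nothing that looks like one of its partitions may exist
    echo "$listing" | awk -v d="$dev" 'index($1, d) == 1 && $1 != d && $2 == "part" { bad = 1 } END { exit bad }' || return 1
  done
}

# Real usage on a node: lsblk -ln -o NAME,TYPE | devices_are_raw "sdb sdc sdd"
```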

vim ceph-director/group_vars/mdss.yml
# Set memory and CPU according to your servers; the defaults can be left unchanged
ceph_mds_docker_memory_limit: "4096m"
ceph_mds_docker_cpu_limit: 4

vim ceph-director/group_vars/mgrs.yml
# Set memory and CPU according to your servers; the defaults can be left unchanged
ceph_mgr_docker_memory_limit: "4096m"
ceph_mgr_docker_cpu_limit: 1

vim ceph-director/group_vars/mons.yml
# Set memory and CPU according to your servers; the defaults can be left unchanged
ceph_mon_docker_memory_limit: "4096m"
ceph_mon_docker_cpu_limit: 1

vim ceph-director/group_vars/rgws.yml
# Set memory and CPU according to your servers; the defaults can be left unchanged
ceph_rgw_docker_memory_limit: "4096m"
ceph_rgw_docker_cpu_limit: 8

Run the Ceph deployment

cd /root/ops-director-kolla/ceph-director

source ansible-playbook-command-docker.sh

Verify Ceph

# Find a Ceph container ID
docker ps | grep ceph

docker exec -it <any_ceph_container_id> bash

# Disable the mon warning about insecure global_id reclaim
ceph config set mon auth_allow_insecure_global_id_reclaim false

# Show the Ceph monitoring services
ceph mgr services

Open in a browser:
https://10.10.10.60:8443/
Username: admin
Password: p@ssw0rd

# Look up the dashboard admin password
cat /root/ops-director-kolla/ceph-director/group_vars/all.yml | grep dashboard_admin_password


http://10.10.10.60:9283/
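Before moving on to OpenStack it is worth gating on cluster health. A minimal sketch, where `ceph_is_healthy` is a hypothetical helper reading the output of `ceph -s` (right after deployment you may see HEALTH_WARN until the global_id setting above is applied):

```shell
# Hypothetical helper: succeed only if `ceph -s` reports HEALTH_OK.
ceph_is_healthy() {
  grep -q 'health: HEALTH_OK'
}

# Inside a mon container: ceph -s | ceph_is_healthy && echo "cluster healthy"
```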

Deploy OpenStack

cd /root/ops-director-kolla

vim inventory/multinode
5 ansible_ssh_pass=000000



vim /etc/kolla/globals.yml
5 kolla_internal_vip_address: "10.10.10.248"
6 docker_registry: "10.10.10.60:8086"
8 network_interface: "eth0"
9 neutron_external_interface: "eth2"
# Use kvm instead if running on a physical server
35 nova_compute_virt_type: "qemu"
source ops-install.sh
http://10.10.10.248/

Username: admin
Password: cat /etc/kolla/passwords.yml | grep keystone_admin_password
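To use the password in scripts rather than eyeballing the grep output, it can be extracted cleanly. A sketch: `keystone_admin_password` is a hypothetical helper assuming the simple `key: value` layout of passwords.yml.

```shell
# Hypothetical helper: print just the keystone_admin_password value.
keystone_admin_password() {
  # $1: path to passwords.yml
  awk -F': ' '$1 == "keystone_admin_password" { print $2 }' "$1"
}

# Usage: keystone_admin_password /etc/kolla/passwords.yml
```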

Ceph Backend Storage

Glance backend storage

source /etc/kolla/admin-openrc.sh

openstack image create cirros04 --disk-format qcow2 --file cirros-0.4.0-x86_64-disk.img

docker exec -it ceph-mon-hci-cloud01 bash

# Returns a list of IDs; check that the Glance image ID appears here
rbd ls images
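The match between the Glance image ID and the RBD object can be scripted instead of compared by eye. A sketch: `image_in_pool` is a hypothetical helper, and the IDs in the test are made up.

```shell
# Hypothetical helper: succeed if the given image ID is an exact line in
# the `rbd ls images` output supplied on stdin.
image_in_pool() {
  grep -qx "$1"
}

# Real usage: rbd ls images | image_in_pool "$IMAGE_ID"
```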

Cinder backend storage

openstack volume type create Ceph --public --description "Cephcinder"
openstack volume create cinder01 --size 50
docker exec -it ceph-mon-hci-cloud01 bash

# Returns a list of IDs; check that the Cinder volume ID appears here
rbd ls volumes
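For Cinder the mapping is indirect: the RBD driver names each volume `volume-<uuid>` by default, so the prefix must be added before searching. A sketch with a hypothetical `volume_in_pool` helper:

```shell
# Hypothetical helper: succeed if "volume-<uuid>" is an exact line in the
# `rbd ls volumes` output supplied on stdin.
volume_in_pool() {
  grep -qx "volume-$1"
}

# Real usage: rbd ls volumes | volume_in_pool "$VOLUME_ID"
```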

Swift deployment and backend storage

source /etc/kolla/admin-openrc.sh

openstack service create --name=swift --description="Swift Service" object-store

openstack user create --domain default --password swift swift

openstack role add --project service --user swift admin

openstack endpoint create swift internal "http://10.10.10.60:8080/swift/v1" --region RegionOne

openstack endpoint create swift public "http://10.10.10.60:8080/swift/v1" --region RegionOne

openstack endpoint create swift admin "http://10.10.10.60:8080/swift/v1" --region RegionOne
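The three endpoint-create commands above differ only in the interface name, so they can be generated in a loop. A sketch: `swift_endpoint_cmds` is a hypothetical helper that prints the commands rather than running them.

```shell
# Hypothetical helper: emit the internal/public/admin endpoint-create
# commands for one Swift endpoint URL.
swift_endpoint_cmds() {
  # $1: endpoint URL
  for iface in internal public admin; do
    echo "openstack endpoint create swift $iface \"$1\" --region RegionOne"
  done
}

# To execute: swift_endpoint_cmds "http://10.10.10.60:8080/swift/v1" | sh
```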



vim /etc/ceph/ceph.conf
26 [client]
27 rbd_default_features = 1
28 rgw_dynamic_resharding = False
29 rgw_enable_ops_log = False
30 rgw_enable_usage_log = True
31 rgw_keystone_accepted_admin_roles = admin
32 rgw_keystone_accepted_roles = admin,_member_,member
33 rgw_keystone_admin_domain = Default
# The password is in /etc/kolla/passwords.yml
34 rgw_keystone_admin_password = Kn2e75mEKQDMChTfPnOkFKBtU9bkIu3YzMkeZTPb
35 rgw_keystone_admin_project = admin
36 rgw_keystone_admin_user = admin
37 rgw_keystone_api_version = 3
38 rgw_keystone_token_cache_size = 500
39 rgw_keystone_revocation_interval = 0
40 # openstack_vip
41 rgw_keystone_url = http://10.10.10.248:5000
42 rgw_s3_auth_use_keystone = True
43 rgw_user_quota_bucket_sync_interval = 60
44 rgw_user_quota_sync_interval = 3600



docker restart <rgw_container>
openstack container create ceph_swift --public

docker exec -it ceph-mon-hci-cloud01 bash

radosgw-admin bucket list

Instance backend storage

# Check the configured network names
cat /etc/kolla/neutron-server/ml2_conf.ini
docker exec -it ceph-mon-hci-cloud01 bash

rbd ls vms
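With Ceph-backed ephemeral storage, Nova names each instance's root disk `<instance-uuid>_disk` in the vms pool, so the check can be scripted like the Glance and Cinder ones. A sketch with a hypothetical `instance_disk_in_pool` helper:

```shell
# Hypothetical helper: succeed if "<uuid>_disk" is an exact line in the
# `rbd ls vms` output supplied on stdin.
instance_disk_in_pool() {
  grep -qx "${1}_disk"
}

# Real usage: rbd ls vms | instance_disk_in_pool "$SERVER_ID"
```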

Log in to verify.