Building an etcd cluster automatically with Ansible
The previous post installed an etcd cluster by hand; this one uses the automation tool Ansible to build the etcd cluster environment for three nodes:
master: 192.168.101.14, node1: 192.168.101.15, node2: 192.168.101.19
1. First, look at the directory tree of this role (the etcd cluster):
# tree
.
├── ansible.cfg
├── hosts
├── roles
│   └── etcd
│       ├── files
│       │   ├── etcd
│       │   └── etcdctl
│       ├── handlers
│       ├── meta
│       ├── tasks
│       │   └── main.yaml
│       ├── templates
│       │   └── etcd.service.j2
│       └── vars
└── work_dir
    ├── install_etcd_cluster.retry
    └── install_etcd_cluster.yaml
Start by defining the nodes in the hosts inventory file:
# egrep -v "^#|^$" hosts
[etcd_cluster]
192.168.101.14
192.168.101.15
192.168.101.19
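A quick connectivity check against this inventory (a standard ad-hoc call to the ping module; it assumes SSH access to the nodes is already set up):
# ansible -i hosts etcd_cluster -m ping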
The etcd role has been created under the roles directory; its files directory carries the etcd and etcdctl binaries for every node. Next, look at main.yaml under tasks:
# cat roles/etcd/tasks/main.yaml
- name: copy etcd to nodes
  copy:
    src: ../files/etcd
    dest: /usr/local/bin/etcd
    mode: '0750'

- name: copy etcdctl to nodes
  copy:
    src: ../files/etcdctl
    dest: /usr/local/bin/etcdctl
    mode: '0750'

- name: create data directory for etcd
  file:
    path: /var/lib/etcd
    state: directory

- name: provide etcd.service to nodes
  template:
    src: etcd.service.j2
    dest: /usr/lib/systemd/system/etcd.service
  register: result

- name: start etcd service
  systemd:
    daemon_reload: true
    name: etcd
    state: started
    enabled: true
  when: result is succeeded
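As a side note, the two copy tasks above could also be collapsed into a single looped task (just a sketch; the loop keyword requires Ansible 2.5 or later):

- name: copy etcd binaries to nodes
  copy:
    src: "../files/{{ item }}"
    dest: "/usr/local/bin/{{ item }}"
    mode: '0750'
  loop:
    - etcd
    - etcdctl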
The first three tasks copy the binaries to every node and create the data directory on each of them. The next task uses a template, so first look at the j2 file under templates:
# cat roles/etcd/templates/etcd.service.j2
[Unit]
Description=etcd server
After=network.target
After=network-online.target
Wants=network-online.target

[Service]
Type=notify
WorkingDirectory=/var/lib/etcd/
EnvironmentFile=-/etc/etcd/etcd.conf
ExecStart=/usr/local/bin/etcd --name {{ ansible_hostname }} --initial-advertise-peer-urls http://{{ ansible_ens33.ipv4.address }}:2380 --listen-peer-urls http://{{ ansible_ens33.ipv4.address }}:2380 --listen-client-urls http://{{ ansible_ens33.ipv4.address }}:2379,http://127.0.0.1:2379 --advertise-client-urls http://{{ ansible_ens33.ipv4.address }}:2379 --initial-cluster-token etcd-cluster-1 --initial-cluster master=http://192.168.101.14:2380,node1=http://192.168.101.15:2380,node2=http://192.168.101.19:2380 --initial-cluster-state new --data-dir=/var/lib/etcd
Restart=on-failure
RestartSec=5
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
The j2 file above uses the variables {{ ansible_hostname }} and {{ ansible_ens33.ipv4.address }}. Both are facts gathered by the setup module (each node's hostname and IP address).
Because the template module renders the j2 file separately for every host, Ansible pushes a unit file to each node in which these variables have been replaced by that node's own hostname and IP, so every generated configuration file matches its host exactly.
Whenever a configuration file needs per-host values like this, use the template module together with a corresponding j2 file.
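Both facts can be inspected in advance with an ad-hoc call to the setup module, for example:
# ansible -i hosts etcd_cluster -m setup -a 'filter=ansible_hostname'
# ansible -i hosts etcd_cluster -m setup -a 'filter=ansible_ens33'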
# cat work_dir/install_etcd_cluster.yaml
- hosts: etcd_cluster
  remote_user: root
  roles:
    - etcd
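Before applying it, the playbook can optionally be validated with the standard ansible-playbook flags:
# ansible-playbook --syntax-check work_dir/install_etcd_cluster.yaml
# ansible-playbook -C work_dir/install_etcd_cluster.yaml     # dry run (check mode)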
# ansible-playbook work_dir/install_etcd_cluster.yaml
PLAY [etcd_cluster] *************************************************************

TASK [Gathering Facts] **********************************************************
ok: [192.168.101.19]
ok: [192.168.101.14]
ok: [192.168.101.15]

TASK [etcd : copy etcd to nodes] ************************************************
ok: [192.168.101.15]
ok: [192.168.101.19]
ok: [192.168.101.14]

TASK [etcd : copy etcdctl to nodes] *********************************************
ok: [192.168.101.15]
ok: [192.168.101.19]
ok: [192.168.101.14]

TASK [etcd : create data directory for etcd] ************************************
ok: [192.168.101.15]
ok: [192.168.101.19]
ok: [192.168.101.14]

TASK [etcd : provide etcd.service to nodes] *************************************
ok: [192.168.101.19]
ok: [192.168.101.15]
ok: [192.168.101.14]

TASK [etcd : start etcd service] ************************************************
changed: [192.168.101.15]
changed: [192.168.101.19]
changed: [192.168.101.14]

PLAY RECAP **********************************************************************
192.168.101.14 : ok=6 changed=1 unreachable=0 failed=0
192.168.101.15 : ok=6 changed=1 unreachable=0 failed=0
192.168.101.19 : ok=6 changed=1 unreachable=0 failed=0
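Before querying etcd itself, a quick ad-hoc check with the command module (using the same inventory) confirms the service is active on every node:
# ansible -i hosts etcd_cluster -m command -a 'systemctl is-active etcd'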
Once the playbook has finished, check the member list from any node:
# etcdctl member list
192d36c71643c39d: name=node2 peerURLs=http://192.168.101.19:2380 clientURLs=http://192.168.101.19:2379 isLeader=false
5f3835545a5f41e4: name=master peerURLs=http://192.168.101.14:2380 clientURLs=http://192.168.101.14:2379 isLeader=false
77c1ac60c5100363: name=node1 peerURLs=http://192.168.101.15:2380 clientURLs=http://192.168.101.15:2379 isLeader=true
Then verify the health of the cluster:
# etcdctl cluster-health
member 192d36c71643c39d is healthy: got healthy result from http://192.168.101.19:2379
member 5f3835545a5f41e4 is healthy: got healthy result from http://192.168.101.14:2379
member 77c1ac60c5100363 is healthy: got healthy result from http://192.168.101.15:2379
cluster is healthy
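To confirm that writes replicate between members, a test key (a hypothetical name chosen here) can be set on one node and read back from another, using the same etcd v2 etcdctl commands shown above:
# etcdctl set /test/hello world      # on any node
# etcdctl get /test/hello            # on a different node; should print "world"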
With that, the Ansible-based deployment of the etcd cluster without TLS authentication is complete.
As an addition, here is how a handler is triggered once the configuration file changes:
# cat roles/etcd/handlers/main.yaml
- name: restart etcd
  systemd:
    daemon_reload: true    # the unit file was replaced, so reload systemd before restarting
    name: etcd
    state: restarted
# cat roles/etcd/tasks/main.yaml
- name: copy etcd to nodes
  copy:
    src: ../files/etcd
    dest: /usr/local/bin/etcd
    mode: '0750'

- name: copy etcdctl to nodes
  copy:
    src: ../files/etcdctl
    dest: /usr/local/bin/etcdctl
    mode: '0750'

- name: create data directory for etcd
  file:
    path: /var/lib/etcd
    state: directory

- name: provide etcd.service to nodes
  template:
    src: etcd.service.j2
    dest: /usr/lib/systemd/system/etcd.service
  register: result

- name: start etcd service
  systemd:
    daemon_reload: true
    name: etcd
    state: started
    enabled: true
  when: result is succeeded

- name: provide configfile changed to etcd
  template:
    src: etcd.service_https_auto.j2
    dest: /usr/lib/systemd/system/etcd.service
  notify: restart etcd
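Note that notify only fires when the template task actually reports a change, and a notified handler normally runs once at the end of the play. If the restart has to happen immediately after the unit file is replaced, the pending handlers can be flushed right after that task (built-in Ansible meta task):

# placed immediately after the "provide configfile changed to etcd" task
- meta: flush_handlers    # run pending handlers (here: restart etcd) right away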
The updated template (--auto-tls and --peer-auto-tls make etcd generate self-signed certificates, and every URL switches from http to https):
# cat roles/etcd/templates/etcd.service_https_auto.j2
[Unit]
Description=etcd server
After=network.target
After=network-online.target
Wants=network-online.target

[Service]
Type=notify
WorkingDirectory=/var/lib/etcd/
EnvironmentFile=-/etc/etcd/etcd.conf
ExecStart=/usr/local/bin/etcd --name {{ ansible_hostname }} --auto-tls --peer-auto-tls --initial-advertise-peer-urls https://{{ ansible_ens33.ipv4.address }}:2380 --listen-peer-urls https://{{ ansible_ens33.ipv4.address }}:2380 --listen-client-urls https://{{ ansible_ens33.ipv4.address }}:2379,https://127.0.0.1:2379 --advertise-client-urls https://{{ ansible_ens33.ipv4.address }}:2379 --initial-cluster-token etcd-cluster-1 --initial-cluster master=https://192.168.101.14:2380,node1=https://192.168.101.15:2380,node2=https://192.168.101.19:2380 --initial-cluster-state new --data-dir=/var/lib/etcd
Restart=on-failure
RestartSec=5
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target