# Deploying a Kafka Cluster with Ansible
Node | IP | Hostname |
---|---|---|
ansible | 172.25.253.137 | ansible |
node1 | 172.25.253.138 | zookeeper1 |
node2 | 172.25.253.139 | zookeeper2 |
node3 | 172.25.253.140 | zookeeper3 |
# View the file tree
[root@ansible ansible]# tree role/
role/
├── kafka
│ ├── file
│ │ └── run.sh
│ ├── tasks
│ │ ├── kafka.yaml
│ │ └── main.yaml
│ ├── templates
│ │ ├── server.properties1.j2
│ │ ├── server.properties2.j2
│ │ └── server.properties3.j2
│ └── vars
│ └── main.yaml
├── playbook.yaml
└── zookeeper
├── file
│ ├── hosts
│ ├── local.repo
│ └── run.sh
├── tasks
│ ├── main.yaml
│ ├── start.yaml
│ └── zookeeper.yaml
├── templates
│ ├── myid1.j2
│ ├── myid2.j2
│ ├── myid3.j2
│ └── zoo.cfg.j2
├── vars
│ └── main.yaml
└── zookeeper-3.4.14.tar.gz
# Overview
- Kafka requires a running ZooKeeper cluster, so the ZooKeeper cluster must be deployed first
- The deployment uses two Ansible roles
- The roles are a zookeeper role and a kafka role
- The zookeeper role handles deploying the ZooKeeper environment
- The kafka role handles deploying the Kafka cluster
# Configure the Ansible inventory
[root@ansible ansible]# cat /etc/ansible/hosts
[server]
172.25.253.138
172.25.253.140
172.25.253.139
# Create the roles and the files they need
- Create the file, tasks, vars, and templates directories for each role
- Upload the tarball each role needs
[root@ansible ansible]# mkdir role/{kafka,zookeeper}
[root@ansible ansible]# mkdir role/kafka/{file,vars,tasks,templates}
[root@ansible ansible]# mkdir role/zookeeper/{file,vars,tasks,templates}
[root@ansible ansible]# ll role/zookeeper/
drwxr-xr-x 2 root root 51 Oct 13 22:12 file
drwxr-xr-x 2 root root 63 Oct 14 13:33 tasks
drwxr-xr-x 2 root root 72 Oct 14 13:40 templates
drwxr-xr-x 2 root root 23 Oct 14 13:38 vars
-rw-r--r-- 1 root root 37676320 Nov 12 2020 zookeeper-3.4.14.tar.gz
[root@ansible ansible]# ll role/kafka/
drwxr-xr-x 2 root root 20 Oct 14 11:39 file
drwxr-xr-x 2 root root 41 Oct 14 13:15 tasks
drwxr-xr-x 2 root root 93 Oct 14 11:28 templates
drwxr-xr-x 2 root root 23 Oct 14 11:26 vars
-rw-r--r-- 1 root root 57471165 Nov 12 2020 kafka_2.11-1.1.1.tgz
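The whole skeleton above can be created in one command with nested brace expansion; a minimal sketch, run against a scratch directory rather than the article's /root/ansible:

```shell
# Create both roles and their four subdirectories in one brace expansion.
# workdir is a stand-in for the article's /root/ansible.
workdir=$(mktemp -d)
mkdir -p "$workdir"/role/{kafka,zookeeper}/{file,vars,tasks,templates}
# List the resulting directory tree
find "$workdir/role" -type d | sort
```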
# Zookeeper role
# 1. Base environment deployment
- Copy the hosts file into the file directory and edit it
- Copy local.repo into the file directory and edit it
- Write run.sh as the startup script for the ZooKeeper cluster
[root@ansible file]# cat hosts
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
172.25.253.138 zookeeper1
172.25.253.139 zookeeper2
172.25.253.140 zookeeper3
[root@ansible file]# cat local.repo
[centos]
name=centos
baseurl=http://172.25.253.144/file/centos/
gpgcheck=0
[gpmall-repo]
name=gpmall-repo
baseurl=http://172.25.253.144/file/gpmall-repo/
gpgcheck=0
[root@ansible file]# cat run.sh
#!/bin/bash
cd /opt/zookeeper-3.4.14/bin/
./zkServer.sh start
# 2. Write the playbook
- Write the ZooKeeper task lists in the tasks directory
- zookeeper.yaml installs and configures the ZooKeeper environment
- start.yaml starts the cluster
- main.yaml sets the execution order
[root@ansible tasks]# cat zookeeper.yaml
- name: copy hosts
  copy: src=./file/hosts dest=/etc/hosts
- name: delete repo
  shell: rm -rf /etc/yum.repos.d/*
- name: copy repo
  copy: src=./file/local.repo dest=/etc/yum.repos.d/local.repo
- name: Install Packages
  yum:
    name:
      - java-1.8.0-openjdk
      - java-1.8.0-openjdk-devel
    state: latest
- name: unpack zookeeper
  unarchive: src=zookeeper-3.4.14.tar.gz dest=/opt/
- name: delete sample config
  file: path=/opt/zookeeper-3.4.14/conf/zoo_sample.cfg state=absent
- name: add config
  template: src=zoo.cfg.j2 dest=/opt/zookeeper-3.4.14/conf/zoo.cfg owner=2002 group=2002 mode=664
- name: create data dir
  file: path=/tmp/zookeeper state=directory
- name: myid1
  template: src=myid1.j2 dest=/tmp/zookeeper/myid
  when: ansible_fqdn == "zookeeper1"
- name: myid2
  template: src=myid2.j2 dest=/tmp/zookeeper/myid
  when: ansible_fqdn == "zookeeper2"
- name: myid3
  template: src=myid3.j2 dest=/tmp/zookeeper/myid
  when: ansible_fqdn == "zookeeper3"
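The three myid templates plus three conditional tasks could also be collapsed into a single task. A hypothetical alternative (not from the original playbook), assuming the hostnames used above, writes the id straight from a per-host mapping:

```yaml
# Hypothetical single task replacing myid1/myid2/myid3 and their templates.
- name: write myid
  copy:
    dest: /tmp/zookeeper/myid
    content: "{{ {'zookeeper1': 1, 'zookeeper2': 2, 'zookeeper3': 3}[ansible_fqdn] }}"
```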
[root@ansible tasks]# cat start.yaml
- name: start zookeeper
  script: /root/ansible/role/zookeeper/file/run.sh
  when: ansible_fqdn == item
  with_items:
    - zookeeper1
    - zookeeper2
    - zookeeper3
[root@ansible tasks]# cat main.yaml
- include: zookeeper.yaml
- include: start.yaml
# 3. Add variables in vars
- Add the following variables to main.yaml in the vars directory
- Each entry is a node IP followed by its quorum and leader-election ports
[root@ansible vars]# cat main.yaml
---
ip1: 172.25.253.138:2888:3888
ip2: 172.25.253.139:2888:3888
ip3: 172.25.253.140:2888:3888
# 4. Edit the j2 files in templates
- Create the three myid files with contents 1, 2, and 3 respectively
- zoo.cfg needs the three server entries with their ids and IPs
[root@ansible templates]# echo 1 > myid1.j2
[root@ansible templates]# echo 2 > myid2.j2
[root@ansible templates]# echo 3 > myid3.j2
[root@ansible templates]# cat zoo.cfg.j2
# The number of milliseconds of each tick
tickTime=2000
# The number of ticks that the initial
# synchronization phase can take
initLimit=10
# The number of ticks that can pass between
# sending a request and getting an acknowledgement
syncLimit=5
# the directory where the snapshot is stored.
# do not use /tmp for storage, /tmp here is just
# example sakes.
dataDir=/tmp/zookeeper
# the port at which the clients will connect
clientPort=2181
# the maximum number of client connections.
# increase this if you need to handle more clients
#maxClientCnxns=60
#
# Be sure to read the maintenance section of the
# administrator guide before turning on autopurge.
#
# http://zookeeper.apache.org/doc/current/zookeeperAdmin.html#sc_maintenance
#
# The number of snapshots to retain in dataDir
#autopurge.snapRetainCount=3
# Purge task interval in hours
# Set to "0" to disable auto purge feature
#autopurge.purgeInterval=1
server.1={{ ip1 }}
server.2={{ ip2 }}
server.3={{ ip3 }}
[root@ansible templates]# ll
total 16
-rw-r--r-- 1 root root 2 Oct 14 13:35 myid1.j2
-rw-r--r-- 1 root root 2 Oct 14 13:35 myid2.j2
-rw-r--r-- 1 root root 2 Oct 13 21:38 myid3.j2
-rw-r--r-- 1 root root 982 Oct 14 13:40 zoo.cfg.j2
# Kafka role
# 1. Write the playbook
- The kafka role only needs to upload and unpack the tarball and replace the config files
- Write the kafka playbook
[root@ansible tasks]# cat kafka.yaml
- name: unpack kafka
  unarchive: src=kafka/kafka_2.11-1.1.1.tgz dest=/media/
- name: change config1
  template: src=server.properties1.j2 dest=/media/kafka_2.11-1.1.1/config/server.properties
  when: ansible_fqdn == "zookeeper1"
- name: change config2
  template: src=server.properties2.j2 dest=/media/kafka_2.11-1.1.1/config/server.properties
  when: ansible_fqdn == "zookeeper2"
- name: change config3
  template: src=server.properties3.j2 dest=/media/kafka_2.11-1.1.1/config/server.properties
  when: ansible_fqdn == "zookeeper3"
- name: run kafka
  shell: cd /media/kafka_2.11-1.1.1/bin/ && ./kafka-server-start.sh -daemon ../config/server.properties
[root@ansible tasks]# cat main.yaml
- include: kafka.yaml
# 2. Add variables in vars
- Add the corresponding variables in the vars directory
[root@ansible vars]# cat main.yaml
---
id1: 1
id2: 2
id3: 3
cluster_ip: 172.25.253.138:2181,172.25.253.139:2181,172.25.253.140:2181
localip1: 172.25.253.138:9092
localip2: 172.25.253.139:9092
localip3: 172.25.253.140:9092
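The cluster_ip value is just the three ZooKeeper client endpoints joined by commas; a quick shell sketch of how that string is formed from the node IPs:

```shell
# Build the zookeeper.connect string from the three ZooKeeper node IPs.
ips="172.25.253.138 172.25.253.139 172.25.253.140"
cluster_ip=$(printf '%s:2181,' $ips)   # append :2181 to each IP, comma-separated
cluster_ip=${cluster_ip%,}             # drop the trailing comma
echo "$cluster_ip"
```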
# 3. Edit the j2 files
- Edit the three j2 files in the templates directory
Find the following two lines in each config file and comment them out (prefix them with #):
#broker.id=0
#zookeeper.connect=localhost:2181
Then append the following three settings at the bottom of each file. Each node uses its own variables: zookeeper1 uses id1, zookeeper2 uses id2, and so on.
zookeeper1 node:
broker.id={{ id1 }}
zookeeper.connect={{ cluster_ip }}
listeners=PLAINTEXT://{{ localip1 }}
zookeeper2 node:
broker.id={{ id2 }}
zookeeper.connect={{ cluster_ip }}
listeners=PLAINTEXT://{{ localip2 }}
zookeeper3 node:
broker.id={{ id3 }}
zookeeper.connect={{ cluster_ip }}
listeners=PLAINTEXT://{{ localip3 }}
[root@ansible templates]# ll
total 24
-rw-r--r-- 1 root root 6953 Oct 14 11:27 server.properties1.j2
-rw-r--r-- 1 root root 6952 Oct 14 11:28 server.properties2.j2
-rw-r--r-- 1 root root 6951 Oct 14 11:28 server.properties3.j2
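The manual edits described above can also be scripted; a hypothetical sed sketch that comments out the two defaults and appends the node-1 settings, shown here against a minimal stand-in file rather than Kafka's real server.properties:

```shell
# Demonstrate the edit on a minimal stand-in for Kafka's server.properties.
cfg=$(mktemp)
printf 'broker.id=0\nzookeeper.connect=localhost:2181\n' > "$cfg"
# Comment out the two default lines (& re-inserts the matched text)...
sed -i 's/^broker.id=0/#&/; s/^zookeeper.connect=localhost:2181/#&/' "$cfg"
# ...and append the templated settings for node 1.
cat >> "$cfg" <<'EOF'
broker.id={{ id1 }}
zookeeper.connect={{ cluster_ip }}
listeners=PLAINTEXT://{{ localip1 }}
EOF
cat "$cfg"
```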
# Role playbook
# Write the role playbook to run both roles in order
[root@ansible role]# cat playbook.yaml
- hosts: server
  remote_user: root
  roles:
    - role: zookeeper
    - role: kafka
# Cluster verification
# 1. Run the playbook
[root@ansible role]# ansible-playbook playbook.yaml
PLAY [server] **************************************************************************************************************
TASK [Gathering Facts] *****************************************************************************************************
ok: [172.25.253.139]
ok: [172.25.253.140]
ok: [172.25.253.138]
TASK [zookeeper : copy hosts] **********************************************************************************************
ok: [172.25.253.140]
ok: [172.25.253.139]
ok: [172.25.253.138]
·
·
·
PLAY RECAP *****************************************************************************************************************
172.25.253.138 : ok=14 changed=8 unreachable=0 failed=0 skipped=4 rescued=0 ignored=0
172.25.253.139 : ok=14 changed=8 unreachable=0 failed=0 skipped=4 rescued=0 ignored=0
172.25.253.140 : ok=14 changed=8 unreachable=0 failed=0 skipped=4 rescued=0 ignored=0
# 2. Verify the cluster is healthy
- Check that the ports are listening
[root@ansible role]# ansible server -m shell -a "ss -ntpl | grep 9092"
172.25.253.140 | CHANGED | rc=0 >>
LISTEN 0 50 ::ffff:172.25.253.140:9092 :::* users:(("java",pid=6105,fd=99))
172.25.253.139 | CHANGED | rc=0 >>
LISTEN 0 50 ::ffff:172.25.253.139:9092 :::* users:(("java",pid=2783,fd=99))
172.25.253.138 | CHANGED | rc=0 >>
LISTEN 0 50 ::ffff:172.25.253.138:9092 :::* users:(("java",pid=5814,fd=99))
- Create a topic on one node and list it from another
zookeeper1:
[root@localhost bin]# ./kafka-topics.sh --create --zookeeper 172.25.253.138:2181 --replication-factor 1 --partitions 1 --topic test
Created topic "test".
zookeeper2:
[root@localhost bin]# ./kafka-topics.sh --list --zookeeper 172.25.253.139:2181
test
Last updated: 2023/11/28, 22:03:59