ZooKeeper is installed on the cluster created in the earlier article "Docker创建的集群下使用ansible部署hadoop" (deploying Hadoop with Ansible on a Docker-created cluster).
OS | hostname | IP |
---|---|---|
CentOS 7 | cluster-master | 172.18.0.2 |
CentOS 7 | cluster-slave1 | 172.18.0.3 |
CentOS 7 | cluster-slave2 | 172.18.0.4 |
CentOS 7 | cluster-slave3 | 172.18.0.5 |
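The ansible commands later in this article target a `slaves` group and a `cluster` group. A minimal inventory matching the table above might look like the following; the exact group names are an assumption carried over from the hadoop deployment article, so adjust them to your own `/etc/ansible/hosts`:

```ini
# Assumed inventory for this cluster; the "cluster" and "slaves"
# groups are the ones referenced by the ansible commands below.
[cluster]
cluster-master
cluster-slave1
cluster-slave2
cluster-slave3

[slaves]
cluster-slave1
cluster-slave2
cluster-slave3
```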
Downloading from the official Apache source is quite slow, so use a domestic mirror instead and download ZooKeeper to /opt:
```
[root@cluster-master opt]# wget https://mirrors.tuna.tsinghua.edu.cn/apache/zookeeper/stable/zookeeper-3.4.10.tar.gz
```

Create a symlink

After the download completes, extract zookeeper-3.4.10.tar.gz and create a symlink to it for easier management:

```
[root@cluster-master opt]# tar -zxvf zookeeper-3.4.10.tar.gz
[root@cluster-master opt]# ln -s zookeeper-3.4.10 zookeeper
```

Edit the configuration file
/opt/zookeeper/conf already provides a zoo_sample.cfg template; copy it to zoo.cfg and edit that copy. My settings are as follows:
```
[root@cluster-master conf]# cp zoo_sample.cfg zoo.cfg
[root@cluster-master conf]# vi zoo.cfg
```

```
# The number of milliseconds of each tick
tickTime=2000
# The number of ticks that the initial
# synchronization phase can take
initLimit=10
# The number of ticks that can pass between
# sending a request and getting an acknowledgement
syncLimit=5
# the directory where the snapshot is stored.
# do not use /tmp for storage, /tmp here is just
# example sakes.
dataDir=/home/zookeeper/data
# the port at which the clients will connect
clientPort=2181
# the maximum number of client connections.
# increase this if you need to handle more clients
#maxClientCnxns=60
#
# Be sure to read the maintenance section of the
# administrator guide before turning on autopurge.
#
# http://zookeeper.apache.org/doc/current/zookeeperAdmin.html#sc_maintenance
#
# The number of snapshots to retain in dataDir
#autopurge.snapRetainCount=3
# Purge task interval in hours
# Set to "0" to disable auto purge feature
#autopurge.purgeInterval=1
server.2=172.18.0.2:2888:3888
server.3=172.18.0.3:2888:3888
server.4=172.18.0.4:2888:3888
server.5=172.18.0.5:2888:3888
```
dataDir is redefined here, and each server ID uses the last octet of that host's IP, again to make management easier.
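As a quick sanity check on the tick settings, the sketch below shows how tickTime, initLimit, and syncLimit combine into actual time windows; the values mirror the zoo.cfg in this article:

```python
# How the zoo.cfg tick settings translate into time budgets.
tick_time_ms = 2000   # tickTime: length of one tick, in milliseconds
init_limit = 10       # initLimit: ticks a follower may take for its initial sync
sync_limit = 5        # syncLimit: ticks allowed between a request and its ack

# A follower that cannot finish its initial sync with the leader
# within this window is dropped from the quorum.
init_window_ms = tick_time_ms * init_limit

# A follower that falls further behind the leader than this is dropped.
sync_window_ms = tick_time_ms * sync_limit

print(init_window_ms, sync_window_ms)  # 20000 10000
```

So with this configuration a follower gets 20 seconds to join and must stay within 10 seconds of the leader afterwards.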
Create a shell script to complete the installation. Create postinstall.sh under /opt/zookeeper; it creates the dataDir directory and the myid file, writing into it the myid value defined in zoo.cfg.
```
vi /opt/zookeeper/postinstall.sh
```

```bash
#!/bin/bash
# zookeeper conf file
conf_file="/opt/zookeeper/conf/zoo.cfg"
# get myid: find this host's IP, then look up its server line in zoo.cfg
IP=$(/sbin/ifconfig -a | grep inet | grep -v 127.0.0.1 | grep -v inet6 | awk '{print $2}')
ID=$(grep ${IP} ${conf_file} | cut -d = -f 1 | cut -d . -f 2)
# get dataDir
dataDir=$(grep dataDir ${conf_file} | grep -v "^#" | cut -d = -f 2)
# create dataDir and myid file
mkdir -p ${dataDir}
:> ${dataDir}/myid
echo ${ID} > ${dataDir}/myid
```

Package the configured zookeeper for upload to the slave hosts
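The myid lookup in postinstall.sh can be sketched in Python to make the logic explicit: given this host's IP, find the matching `server.N=IP:2888:3888` line and use N as the myid. This is an illustrative sketch, not part of the deployment; the sample config mirrors the zoo.cfg above:

```python
import re

# Sample of the server lines from the article's zoo.cfg.
ZOO_CFG = """\
dataDir=/home/zookeeper/data
clientPort=2181
server.2=172.18.0.2:2888:3888
server.3=172.18.0.3:2888:3888
server.4=172.18.0.4:2888:3888
server.5=172.18.0.5:2888:3888
"""

def myid_for(ip: str, cfg: str) -> int:
    """Return the server id whose quorum address matches ip exactly."""
    for line in cfg.splitlines():
        m = re.match(r"server\.(\d+)=([^:]+):", line)
        if m and m.group(2) == ip:
            return int(m.group(1))
    raise LookupError(f"no server entry for {ip}")

print(myid_for("172.18.0.4", ZOO_CFG))  # → 4
```

Unlike the `grep ${IP}` in the shell script, this matches the IP exactly, so `172.18.0.2` cannot accidentally match a hypothetical `172.18.0.20` entry.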
Archive and compress the symlink zookeeper together with the directory zookeeper-3.4.10:
```
[root@cluster-master opt]# tar -zcvf zookeeper-dis.tar.gz zookeeper zookeeper-3.4.10
```

Create a YAML playbook to install zookeeper
```
[root@cluster-master opt]# vi install-zookeeper.yaml
```

```yaml
---
- hosts: slaves
  tasks:
    - name: install ifconfig
      yum: name=net-tools state=latest
    - name: unarchive zookeeper
      unarchive: src=/opt/zookeeper-dis.tar.gz dest=/opt
    - name: postinstall
      shell: bash /opt/zookeeper/postinstall.sh
```

Distribute the installation files to the slave hosts
```
[root@cluster-master opt]# ansible-playbook install-zookeeper.yaml
```

Start zookeeper
At this point the zookeeper cluster is ready to start:
```
[root@cluster-master opt]# ansible cluster -m command -a "/opt/zookeeper/bin/zkServer.sh start"
```

Check the status
```
[root@cluster-master bin]# ./zkServer.sh status
ZooKeeper JMX enabled by default
Using config: /opt/zookeeper/bin/../conf/zoo.cfg
Mode: follower
```

Run the client
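When `zkServer.sh status` is run on every node (for example via ansible), a healthy ensemble should report exactly one leader and the rest followers. The sketch below parses such outputs and checks that invariant; the sample outputs are illustrative, not captured from a real run:

```python
# Hypothetical status outputs collected from each node.
outputs = {
    "cluster-master": "ZooKeeper JMX enabled by default\nMode: follower",
    "cluster-slave1": "ZooKeeper JMX enabled by default\nMode: leader",
    "cluster-slave2": "ZooKeeper JMX enabled by default\nMode: follower",
    "cluster-slave3": "ZooKeeper JMX enabled by default\nMode: follower",
}

def modes(outputs: dict) -> dict:
    """Map each host to the value of the Mode: line in its status output."""
    result = {}
    for host, text in outputs.items():
        for line in text.splitlines():
            if line.startswith("Mode:"):
                result[host] = line.split(":", 1)[1].strip()
    return result

roles = modes(outputs)
leaders = [h for h, m in roles.items() if m == "leader"]
assert len(leaders) == 1, f"expected exactly one leader, got {leaders}"
print(roles)
```

Which node ends up as leader depends on the election, so do not expect a particular host.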
```
[root@cluster-master bin]# ./zkCli.sh -server localhost:2181
Connecting to localhost:2181
2017-08-29 18:05:36,078 [myid:] - INFO [main:Environment@100] - Client environment:zookeeper.version=3.4.10-39d3a4f269333c922ed3db283be479f9deacaa0f, built on 03/23/2017 10:13 GMT
2017-08-29 18:05:36,091 [myid:] - INFO [main:Environment@100] - Client environment:host.name=cluster-master
2017-08-29 18:05:36,091 [myid:] - INFO [main:Environment@100] - Client environment:java.version=1.8.0_141
2017-08-29 18:05:36,098 [myid:] - INFO [main:Environment@100] - Client environment:java.vendor=Oracle Corporation
2017-08-29 18:05:36,098 [myid:] - INFO [main:Environment@100] - Client environment:java.home=/usr/lib/jvm/java-1.8.0-openjdk-1.8.0.141-1.b16.el7_3.x86_64/jre
2017-08-29 18:05:36,098 [myid:] - INFO [main:Environment@100] - Client environment:java.class.path=/opt/zookeeper/bin/../build/classes:/opt/zookeeper/bin/../build/lib/*.jar:/opt/zookeeper/bin/../lib/slf4j-log4j12-1.6.1.jar:/opt/zookeeper/bin/../lib/slf4j-api-1.6.1.jar:/opt/zookeeper/bin/../lib/netty-3.10.5.Final.jar:/opt/zookeeper/bin/../lib/log4j-1.2.16.jar:/opt/zookeeper/bin/../lib/jline-0.9.94.jar:/opt/zookeeper/bin/../zookeeper-3.4.10.jar:/opt/zookeeper/bin/../src/java/lib/*.jar:/opt/zookeeper/bin/../conf:
2017-08-29 18:05:36,099 [myid:] - INFO [main:Environment@100] - Client environment:java.library.path=/usr/java/packages/lib/amd64:/usr/lib64:/lib64:/lib:/usr/lib
2017-08-29 18:05:36,100 [myid:] - INFO [main:Environment@100] - Client environment:java.io.tmpdir=/tmp
2017-08-29 18:05:36,100 [myid:] - INFO [main:Environment@100] - Client environment:java.compiler=<NA>
2017-08-29 18:05:36,100 [myid:] - INFO [main:Environment@100] - Client environment:os.name=Linux
2017-08-29 18:05:36,100 [myid:] - INFO [main:Environment@100] - Client environment:os.arch=amd64
2017-08-29 18:05:36,100 [myid:] - INFO [main:Environment@100] - Client environment:os.version=3.10.0-514.26.2.el7.x86_64
2017-08-29 18:05:36,101 [myid:] - INFO [main:Environment@100] - Client environment:user.name=root
2017-08-29 18:05:36,101 [myid:] - INFO [main:Environment@100] - Client environment:user.home=/root
2017-08-29 18:05:36,101 [myid:] - INFO [main:Environment@100] - Client environment:user.dir=/opt/zookeeper-3.4.10/bin
2017-08-29 18:05:36,124 [myid:] - INFO [main:ZooKeeper@438] - Initiating client connection, connectString=localhost:2181 sessionTimeout=30000 watcher=org.apache.zookeeper.ZooKeeperMain$MyWatcher@25f38edc
2017-08-29 18:05:36,205 [myid:] - INFO [main-SendThread(localhost:2181):ClientCnxn$SendThread@1032] - Opening socket connection to server localhost/127.0.0.1:2181. Will not attempt to authenticate using SASL (unknown error)
Welcome to ZooKeeper!
JLine support is enabled
2017-08-29 18:05:36,730 [myid:] - INFO [main-SendThread(localhost:2181):ClientCnxn$SendThread@876] - Socket connection established to localhost/127.0.0.1:2181, initiating session
2017-08-29 18:05:36,795 [myid:] - INFO [main-SendThread(localhost:2181):ClientCnxn$SendThread@1299] - Session establishment complete on server localhost/127.0.0.1:2181, sessionid = 0x25e2f22aa660001, negotiated timeout = 30000

WATCHER::

WatchedEvent state:SyncConnected type:None path:null
[zk: localhost:2181(CONNECTED) 0] ls /
[zookeeper]
[zk: localhost:2181(CONNECTED) 1]
```

Summary
This deployment reused the cluster and ansible setup built earlier, and deploying applications with ansible really does turn out to be very convenient.
Original article: https://www.ucloud.cn/yun/27013.html