Deploying a Hadoop 2.7.3 distributed cluster on a single machine with Docker 17.03.1
[TOC]
Disclaimer: These articles are my own technical notes. Please credit the source when reposting:
[1] https://segmentfault.com/u/yzwall
[2] blog.csdn.net/j_dark/
PC: Ubuntu 16.04.1 LTS
Docker version: 17.03.1-ce, OS/Arch: linux/amd64
Hadoop version: hadoop-2.7.3
1 Build the Hadoop image in Docker
1.1 Create a Docker container
Create a container (named container below) from the official ubuntu image; by default Docker pulls the latest slimmed-down Ubuntu image:
```bash
sudo docker run -ti --name container ubuntu
```
1.2 Replace the apt sources
Edit the default source list /etc/apt/sources.list and replace the official mirrors with a domestic (China) mirror so that the apt-get installs below run faster.
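The original notes do not pin down a specific mirror; as an illustration only, assuming the container is Ubuntu 16.04 (xenial) and using the Aliyun mirror, the switch could look like this:
```bash
# back up the stock source list, then point apt at a domestic mirror (Aliyun shown as an example)
cp /etc/apt/sources.list /etc/apt/sources.list.bak
cat > /etc/apt/sources.list <<'EOF'
deb http://mirrors.aliyun.com/ubuntu/ xenial main restricted universe multiverse
deb http://mirrors.aliyun.com/ubuntu/ xenial-updates main restricted universe multiverse
deb http://mirrors.aliyun.com/ubuntu/ xenial-security main restricted universe multiverse
EOF
apt-get update
```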
1.3 Install Java 8
```bash
# the Docker image strips out many stock Ubuntu components to keep it small; refresh the package index first
apt-get update
# provides the add-apt-repository command
apt-get install software-properties-common python-software-properties
add-apt-repository ppa:webupd8team/java
apt-get update
apt-get install oracle-java8-installer
java -version
```
1.4 Install hadoop-2.7.3 inside Docker
1.4.1 Download hadoop-2.7.3
```bash
# create the nested directory
mkdir -p /software/apache/hadoop
cd /software/apache/hadoop
# download and unpack hadoop
wget http://mirrors.sonic.net/apache/hadoop/common/hadoop-2.7.3/hadoop-2.7.3.tar.gz
tar xvzf hadoop-2.7.3.tar.gz
```
1.4.2 Configure environment variables
Edit ~/.bashrc and append the following lines at the end of the file:
```bash
export JAVA_HOME=/usr/lib/jvm/java-8-oracle
export HADOOP_HOME=/software/apache/hadoop/hadoop-2.7.3
export HADOOP_CONFIG_HOME=$HADOOP_HOME/etc/hadoop
export PATH=$PATH:$HADOOP_HOME/bin
export PATH=$PATH:$HADOOP_HOME/sbin
```
Run `source ~/.bashrc` to make the environment variables take effect.
Note: once ~/.bashrc is configured this way, hadoop-env.sh needs no further changes.
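A quick sanity check that the variables resolved (only the paths configured above are assumed):
```bash
# `hadoop version` should report Hadoop 2.7.3 if JAVA_HOME and PATH are correct
echo $JAVA_HOME
echo $HADOOP_HOME
hadoop version
```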
1.5 Configure Hadoop
Configuring Hadoop mainly involves four files: core-site.xml, hdfs-site.xml, mapred-site.xml, and yarn-site.xml.
Create namenode, datanode, and tmp directories under $HADOOP_HOME:
```bash
cd $HADOOP_HOME
mkdir tmp
mkdir namenode
mkdir datanode
```
1.5.1 Configure core-site.xml
Point hadoop.tmp.dir at the tmp directory created above.
Point fs.default.name at the master node, i.e. hdfs://master:9000.
```xml
<configuration>
    <property>
        <name>hadoop.tmp.dir</name>
        <value>/software/apache/hadoop/hadoop-2.7.3/tmp</value>
        <description>A base for other temporary directories.</description>
    </property>
    <property>
        <name>io.file.buffer.size</name>
        <value>131072</value>
    </property>
    <property>
        <name>fs.default.name</name>
        <value>hdfs://master:9000</value>
        <final>true</final>
        <description>The name of the default file system.</description>
    </property>
</configuration>
```
1.5.2 Configure hdfs-site.xml
dfs.replication is the number of replicas kept for each block; with 1 NameNode and 3 DataNodes in this cluster, the replication factor is set to 3 to match the number of DataNodes.
dfs.namenode.name.dir and dfs.datanode.data.dir point to the namenode and datanode directories created earlier.
```xml
<configuration>
    <property>
        <name>dfs.namenode.secondary.http-address</name>
        <value>master:9001</value>
    </property>
    <property>
        <name>dfs.replication</name>
        <value>3</value>
        <final>true</final>
        <description>Default block replication.</description>
    </property>
    <property>
        <name>dfs.namenode.name.dir</name>
        <value>/software/apache/hadoop/hadoop-2.7.3/namenode</value>
        <final>true</final>
    </property>
    <property>
        <name>dfs.datanode.data.dir</name>
        <value>/software/apache/hadoop/hadoop-2.7.3/datanode</value>
        <final>true</final>
    </property>
    <property>
        <name>dfs.webhdfs.enabled</name>
        <value>true</value>
    </property>
</configuration>
```
1.5.3 Configure mapred-site.xml
Under $HADOOP_CONFIG_HOME, create mapred-site.xml from the bundled template with cp:
```bash
cd $HADOOP_CONFIG_HOME
cp mapred-site.xml.template mapred-site.xml
```
Edit mapred-site.xml. (In Hadoop 1.x the property mapred.job.tracker pointed the JobTracker at the master node; in Hadoop 2.x this is no longer needed, as the quote below explains.)
In Hadoop 2.x.x, users no longer configure mapred.job.tracker, because the JobTracker no longer exists; its role is taken over by the MRAppMaster component. Instead, mapreduce.framework.name specifies the runtime framework, which is set to yarn here.
(quoted from 《Hadoop技术内幕:深入解析YARN架构设计与实现原理》, i.e. Inside Hadoop: In-depth Analysis of YARN Architecture Design and Implementation)
```xml
<configuration>
    <property>
        <name>mapreduce.framework.name</name>
        <value>yarn</value>
    </property>
    <property>
        <name>mapreduce.jobhistory.address</name>
        <value>master:10020</value>
    </property>
    <property>
        <name>mapreduce.jobhistory.webapp.address</name>
        <value>master:19888</value>
    </property>
</configuration>
```
1.5.4 Configure yarn-site.xml
```xml
<configuration>
    <property>
        <name>yarn.nodemanager.aux-services</name>
        <value>mapreduce_shuffle</value>
    </property>
    <property>
        <name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name>
        <value>org.apache.hadoop.mapred.ShuffleHandler</value>
    </property>
    <property>
        <name>yarn.resourcemanager.address</name>
        <value>master:8032</value>
    </property>
    <property>
        <name>yarn.resourcemanager.scheduler.address</name>
        <value>master:8030</value>
    </property>
    <property>
        <name>yarn.resourcemanager.resource-tracker.address</name>
        <value>master:8031</value>
    </property>
    <property>
        <name>yarn.resourcemanager.admin.address</name>
        <value>master:8033</value>
    </property>
    <property>
        <name>yarn.resourcemanager.webapp.address</name>
        <value>master:8088</value>
    </property>
</configuration>
```
1.5.5 Install vim, ifconfig, and ping
Install vim plus the packages that provide the ifconfig and ping commands:
```bash
apt-get update
apt-get install vim
apt-get install net-tools       # for ifconfig
apt-get install inetutils-ping  # for ping
```
1.5.6 Build the Hadoop base image
Assuming the current container is named container, commit it as the base image ubuntu:hadoop. All subsequent cluster containers are created from this image, so none of the configuration above has to be repeated:
```bash
sudo docker commit -m "hadoop installed" container ubuntu:hadoop
```
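To confirm that the commit produced the image, the local image list can be checked with the standard Docker CLI:
```bash
# the new ubuntu:hadoop image should show up here
sudo docker images | grep hadoop
```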
2 Create the Hadoop cluster containers
2.1 Create the master and slave containers
From the base image ubuntu:hadoop, create the master container and the slave1~slave3 containers; each container's hostname matches its container name:
```bash
# create master
docker run -ti -h master --name master ubuntu:hadoop /bin/bash
# create slave1
docker run -ti -h slave1 --name slave1 ubuntu:hadoop /bin/bash
# create slave2
docker run -ti -h slave2 --name slave2 ubuntu:hadoop /bin/bash
# create slave3
docker run -ti -h slave3 --name slave3 ubuntu:hadoop /bin/bash
```
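If you exit one of these containers later it stops running; it can be started and re-entered with the standard Docker commands (shown here for master):
```bash
# restart and reattach to a stopped cluster container
docker start master
docker attach master
```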
2.2 Configure /etc/hosts in every container
Add the following entries to /etc/hosts in each container; check each container's IP address with ifconfig:
```
172.17.0.2 master
172.17.0.3 slave1
172.17.0.4 slave2
172.17.0.5 slave3
```
Note: after a Docker container restarts, the hosts entries may be lost. For now the only practical workaround is to avoid restarting the containers frequently; otherwise the hosts file has to be edited again by hand.
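Re-adding the entries by hand can be reduced to one paste; a minimal sketch, assuming the IPs above are still valid (verify with ifconfig first):
```bash
# rerun inside each container after a restart; adjust the IPs to what ifconfig reports
cat >> /etc/hosts <<'EOF'
172.17.0.2 master
172.17.0.3 slave1
172.17.0.4 slave2
172.17.0.5 slave3
EOF
```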
Reference: http://dockone.io/question/400
1. /etc/hosts, /etc/resolv.conf and /etc/hostname do not exist in the image; they live under /var/lib/docker/containers/ on the host and are mounted into the container when it starts. Changes made to them inside the container therefore do not end up in the container's top layer but are written directly into those physical files.
2. Why are the changes gone after a restart? Because every time Docker starts a container it rebuilds a fresh /etc/hosts. And why does it do that? Because the container's IP address can change across restarts, which would leave stale entries in the old hosts file, so Docker regenerates it to avoid dirty data.
2.3 SSH configuration for the cluster nodes
2.3.1 All nodes: install ssh
```bash
apt-get update
apt-get install ssh
apt-get install openssh-server
```
2.3.2 All nodes: generate an RSA key pair
```bash
# generate a passphrase-less key pair under ~/.ssh
ssh-keygen -t rsa -P ""
```
2.3.3 Master node: create the authorized_keys file
Append the generated public key to authorized_keys:
```bash
cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
```
2.3.4 All nodes: modify sshd_config
Modify sshd_config so that ssh allows remote root login on every node:
```bash
vim /etc/ssh/sshd_config
# change "PermitRootLogin prohibit-password" to "PermitRootLogin yes"
# restart the ssh service
service ssh restart
```
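If a non-interactive edit is preferred over vim, a sed one-liner can make the same change (assuming the stock Ubuntu wording of the PermitRootLogin line):
```bash
# flip PermitRootLogin from prohibit-password to yes, then restart sshd
sed -i 's/^PermitRootLogin prohibit-password/PermitRootLogin yes/' /etc/ssh/sshd_config
service ssh restart
```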
2.3.5 Master node: copy the key file to the slaves with scp
Copy authorized_keys from the master to ~/.ssh on every slave node, overwriting the file of the same name. With identical key files on all nodes, any node can reach any other node over ssh:
```bash
cd ~/.ssh
scp authorized_keys root@slave1:~/.ssh/
scp authorized_keys root@slave2:~/.ssh/
scp authorized_keys root@slave3:~/.ssh/
```
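The same distribution can also be written as a loop, which is handy if more slaves are added later (a sketch, not part of the original notes):
```bash
# copy the master's authorized_keys to every slave in one pass
for s in slave1 slave2 slave3; do
  scp ~/.ssh/authorized_keys root@"$s":~/.ssh/
done
```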
2.3.6 Slave nodes: fix the file permissions so the key takes effect
```bash
chmod 600 ~/.ssh/authorized_keys
```
Notes:
Check whether the ssh service is running: `ps -e | grep ssh`
Start the ssh service: `service ssh start`
Restart the ssh service: `service ssh restart`
Once the steps in section 2.3 are complete, all containers can reach one another over ssh.
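A quick way to confirm password-less access from the master (standard ssh usage, slave1 used as the example):
```bash
# should print "slave1" without prompting for a password
ssh root@slave1 hostname
```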
2.4 Configure the master node
On the master node, edit the slaves file to register the slave nodes:
```bash
cd $HADOOP_CONFIG_HOME/
vim slaves
```
Replace its contents with:
```
slave1
slave2
slave3
```
2.5 Start the Hadoop cluster
On the master node, run `hdfs namenode -format`. Output like the following indicates that the NameNode was formatted successfully:
```
common.Storage: Storage directory /software/apache/hadoop/hadoop-2.7.3/namenode has been successfully formatted.
```
Run `start-all.sh` to start the cluster:
```
root@master:/# start-all.sh
This script is Deprecated. Instead use start-dfs.sh and start-yarn.sh
Starting namenodes on [master]
The authenticity of host 'master (172.17.0.2)' can't be established.
ECDSA key fingerprint is SHA256:OewrSOYpvfDE6ixf6Gw9U7I9URT2zDCCtDJ6tjuZz/4.
Are you sure you want to continue connecting (yes/no)? yes
master: Warning: Permanently added 'master,172.17.0.2' (ECDSA) to the list of known hosts.
master: starting namenode, logging to /software/apache/hadoop/hadoop-2.7.3/logs/hadoop-root-namenode-master.out
slave3: starting datanode, logging to /software/apache/hadoop/hadoop-2.7.3/logs/hadoop-root-datanode-slave3.out
slave2: starting datanode, logging to /software/apache/hadoop/hadoop-2.7.3/logs/hadoop-root-datanode-slave2.out
slave1: starting datanode, logging to /software/apache/hadoop/hadoop-2.7.3/logs/hadoop-root-datanode-slave1.out
Starting secondary namenodes [master]
master: starting secondarynamenode, logging to /software/apache/hadoop/hadoop-2.7.3/logs/hadoop-root-secondarynamenode-master.out
starting yarn daemons
starting resourcemanager, logging to /software/apache/hadoop/hadoop-2.7.3/logs/yarn-root-resourcemanager-master.out
slave3: starting nodemanager, logging to /software/apache/hadoop/hadoop-2.7.3/logs/yarn-root-nodemanager-slave3.out
slave1: starting nodemanager, logging to /software/apache/hadoop/hadoop-2.7.3/logs/yarn-root-nodemanager-slave1.out
slave2: starting nodemanager, logging to /software/apache/hadoop/hadoop-2.7.3/logs/yarn-root-nodemanager-slave2.out
```
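Besides the per-node jps checks below, HDFS itself can confirm that all three DataNodes registered:
```bash
# should report "Live datanodes (3)" once the DataNodes have checked in
hdfs dfsadmin -report | grep "Live datanodes"
```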
Run jps on the master and on each slave to verify the daemons:
master:
```
root@master:/# jps
2065 Jps
1446 NameNode
1801 ResourceManager
1641 SecondaryNameNode
```
slave1:
```
1107 NodeManager
1220 Jps
1000 DataNode
```
slave2:
```
241 DataNode
475 Jps
348 NodeManager
```
slave3:
```
500 Jps
388 NodeManager
281 DataNode
```
3 Run wordcount
Create the input directory /hadoopinput in HDFS and put the input file LICENSE.txt into it:
```
root@master:/# hdfs dfs -mkdir -p /hadoopinput
root@master:/# hdfs dfs -put LICENSE.txt /hadoopinput
```
Change into $HADOOP_HOME/share/hadoop/mapreduce and submit the wordcount job to the cluster, writing the result to the HDFS directory /hadoopoutput:
```
root@master:/# cd $HADOOP_HOME/share/hadoop/mapreduce
root@master:/software/apache/hadoop/hadoop-2.7.3/share/hadoop/mapreduce# hadoop jar hadoop-mapreduce-examples-2.7.3.jar wordcount /hadoopinput /hadoopoutput
17/05/26 01:21:34 INFO client.RMProxy: Connecting to ResourceManager at master/172.17.0.2:8032
17/05/26 01:21:35 INFO input.FileInputFormat: Total input paths to process : 1
17/05/26 01:21:35 INFO mapreduce.JobSubmitter: number of splits:1
17/05/26 01:21:35 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1495722519742_0001
17/05/26 01:21:36 INFO impl.YarnClientImpl: Submitted application application_1495722519742_0001
17/05/26 01:21:36 INFO mapreduce.Job: The url to track the job: http://master:8088/proxy/application_1495722519742_0001/
17/05/26 01:21:36 INFO mapreduce.Job: Running job: job_1495722519742_0001
17/05/26 01:21:43 INFO mapreduce.Job: Job job_1495722519742_0001 running in uber mode : false
17/05/26 01:21:43 INFO mapreduce.Job:  map 0% reduce 0%
17/05/26 01:21:48 INFO mapreduce.Job:  map 100% reduce 0%
17/05/26 01:21:54 INFO mapreduce.Job:  map 100% reduce 100%
17/05/26 01:21:55 INFO mapreduce.Job: Job job_1495722519742_0001 completed successfully
17/05/26 01:21:55 INFO mapreduce.Job: Counters: 49
        File System Counters
                FILE: Number of bytes read=29366
                FILE: Number of bytes written=295977
                FILE: Number of read operations=0
                FILE: Number of large read operations=0
                FILE: Number of write operations=0
                HDFS: Number of bytes read=84961
                HDFS: Number of bytes written=22002
                HDFS: Number of read operations=6
                HDFS: Number of large read operations=0
                HDFS: Number of write operations=2
        Job Counters
                Launched map tasks=1
                Launched reduce tasks=1
                Data-local map tasks=1
                Total time spent by all maps in occupied slots (ms)=2922
                Total time spent by all reduces in occupied slots (ms)=3148
                Total time spent by all map tasks (ms)=2922
                Total time spent by all reduce tasks (ms)=3148
                Total vcore-milliseconds taken by all map tasks=2922
                Total vcore-milliseconds taken by all reduce tasks=3148
                Total megabyte-milliseconds taken by all map tasks=2992128
                Total megabyte-milliseconds taken by all reduce tasks=3223552
        Map-Reduce Framework
                Map input records=1562
                Map output records=12371
                Map output bytes=132735
                Map output materialized bytes=29366
                Input split bytes=107
                Combine input records=12371
                Combine output records=1906
                Reduce input groups=1906
                Reduce shuffle bytes=29366
                Reduce input records=1906
                Reduce output records=1906
                Spilled Records=3812
                Shuffled Maps =1
                Failed Shuffles=0
                Merged Map outputs=1
                GC time elapsed (ms)=78
                CPU time spent (ms)=1620
                Physical memory (bytes) snapshot=451264512
                Virtual memory (bytes) snapshot=3915927552
                Total committed heap usage (bytes)=348127232
        Shuffle Errors
                BAD_ID=0
                CONNECTION=0
                IO_ERROR=0
                WRONG_LENGTH=0
                WRONG_MAP=0
                WRONG_REDUCE=0
        File Input Format Counters
                Bytes Read=84854
        File Output Format Counters
                Bytes Written=22002
```
The result is stored in /hadoopoutput/part-r-00000; view it with:
```
root@master:/# hdfs dfs -ls /hadoopoutput
Found 2 items
-rw-r--r--   3 root supergroup          0 2017-05-26 01:21 /hadoopoutput/_SUCCESS
-rw-r--r--   3 root supergroup      22002 2017-05-26 01:21 /hadoopoutput/part-r-00000
root@master:/# hdfs dfs -cat /hadoopoutput/part-r-00000
""AS 2
"AS 16
"COPYRIGHTS 1
"Contribution" 2
"Contributor" 2
"Derivative 1
"Legal 1
"License" 1
"License"); 1
"Licensed 1
"Licensor" 1
...
```
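If a local copy of the result is wanted as well, it can be pulled out of HDFS with the standard get command (the local file name here is just an example):
```bash
# copy the wordcount output from HDFS into the current local directory
hdfs dfs -get /hadoopoutput/part-r-00000 ./wordcount-result.txt
```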
At this point, the Hadoop 2.7.3 cluster deployed on a single machine with Docker 17.03.1 is up and running!
References
[1] http://tashan10.com/yong-dockerda-jian-hadoopwei-fen-bu-shi-ji-qun/
[2] http://blog.csdn.net/xiaoxiangzi222/article/details/52757168