This article walks through the full process of setting up a Hadoop 2.x pseudo-distributed environment, step by step, for your reference. The details are as follows.
1. Edit hadoop-env.sh, yarn-env.sh, and mapred-env.sh
Method: open these three files with Notepad++ (as the beifeng user)
Add the line: export JAVA_HOME=/opt/modules/jdk1.7.0_67
2. Edit the core-site.xml, hdfs-site.xml, yarn-site.xml, and mapred-site.xml configuration files
1) Edit core-site.xml
<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://Hadoop-senior02.beifeng.com:8020</value>
  </property>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/opt/modules/hadoop-2.5.0/data</value>
  </property>
</configuration>
2) Edit hdfs-site.xml
<configuration>
  <property>
    <name>dfs.replication</name>
    <value>1</value>
  </property>
  <property>
    <name>dfs.namenode.http-address</name>
    <value>Hadoop-senior02.beifeng.com:50070</value>
  </property>
</configuration>
3) Edit yarn-site.xml
<configuration>
  <property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
  </property>
  <property>
    <name>yarn.resourcemanager.hostname</name>
    <value>Hadoop-senior02.beifeng.com</value>
  </property>
  <property>
    <name>yarn.log-aggregation-enable</name>
    <value>true</value>
  </property>
  <property>
    <name>yarn.log-aggregation.retain-seconds</name>
    <value>86400</value>
  </property>
</configuration>
4) Edit mapred-site.xml
<configuration>
  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
  </property>
  <property>
    <name>mapreduce.jobhistory.webapp.address</name>
    <value>0.0.0.0:19888</value>
  </property>
</configuration>
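All four files share the same `<configuration>`/`<property>` XML layout. As an illustrative sketch (not part of the official setup steps), the following Python snippet uses only the standard library to read a property such as `fs.defaultFS` out of such a file; the inline XML string mirrors the core-site.xml content shown earlier.

```python
import xml.etree.ElementTree as ET

# Minimal core-site.xml content, mirroring the configuration above.
CORE_SITE = """<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://Hadoop-senior02.beifeng.com:8020</value>
  </property>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/opt/modules/hadoop-2.5.0/data</value>
  </property>
</configuration>"""

def read_property(xml_text, name):
    """Return the <value> of the <property> whose <name> matches, or None."""
    root = ET.fromstring(xml_text)
    for prop in root.findall("property"):
        if prop.findtext("name") == name:
            return prop.findtext("value")
    return None

print(read_property(CORE_SITE, "fs.defaultFS"))
```

Reading the file back this way is a quick sanity check that a property landed in the right `<configuration>` block before starting the daemons.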
3. Start HDFS
1) Format the NameNode: $ bin/hdfs namenode -format
2) Start the NameNode: $ sbin/hadoop-daemon.sh start namenode
3) Start the DataNode: $ sbin/hadoop-daemon.sh start datanode
4) HDFS monitoring web page: http://hadoop-senior02.beifeng.com:50070
4. Start YARN
1) Start the ResourceManager: $ sbin/yarn-daemon.sh start resourcemanager
2) Start the NodeManager: $ sbin/yarn-daemon.sh start nodemanager
3) YARN monitoring web page: http://hadoop-senior02.beifeng.com:8088
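After starting the daemons, the JDK's jps command should list NameNode, DataNode, ResourceManager, and NodeManager. As a hypothetical illustration (the sample output and pids below are made up, not captured from a real run), this Python snippet checks a jps listing for the expected process names:

```python
EXPECTED = {"NameNode", "DataNode", "ResourceManager", "NodeManager"}

# Hypothetical output captured from `jps`; the pids are illustrative only.
sample_jps = """3201 NameNode
3298 DataNode
3555 ResourceManager
3650 NodeManager
3799 Jps"""

def missing_daemons(jps_output):
    """Return the expected daemons that do not appear in the jps output."""
    running = {line.split()[1] for line in jps_output.splitlines() if line.strip()}
    return EXPECTED - running

print(missing_daemons(sample_jps))  # set() when everything is up
```

If the returned set is non-empty, check the corresponding daemon's log under the Hadoop logs directory before moving on.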
5. Test the wordcount example jar
1) Change to the Hadoop directory: /opt/modules/hadoop-2.5.0
2) Run the test: $ bin/yarn jar share/hadoop/mapreduce/hadoop-mapreduce-examples-2.5.0.jar wordcount /input/sort.txt /output6/
Sample output from the run:
16/05/08 06:39:13 INFO client.RMProxy: Connecting to ResourceManager at Hadoop-senior02.beifeng.com/192.168.241.130:8032
16/05/08 06:39:15 INFO input.FileInputFormat: Total input paths to process : 1
16/05/08 06:39:15 INFO mapreduce.JobSubmitter: number of splits:1
16/05/08 06:39:15 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1462660542807_0001
16/05/08 06:39:16 INFO impl.YarnClientImpl: Submitted application application_1462660542807_0001
16/05/08 06:39:16 INFO mapreduce.Job: The url to track the job: http://Hadoop-senior02.beifeng.com:8088/proxy/application_1462660542807_0001/
16/05/08 06:39:16 INFO mapreduce.Job: Running job: job_1462660542807_0001
16/05/08 06:39:36 INFO mapreduce.Job: Job job_1462660542807_0001 running in uber mode : false
16/05/08 06:39:36 INFO mapreduce.Job: map 0% reduce 0%
16/05/08 06:39:48 INFO mapreduce.Job: map 100% reduce 0%
16/05/08 06:40:04 INFO mapreduce.Job: map 100% reduce 100%
16/05/08 06:40:04 INFO mapreduce.Job: Job job_1462660542807_0001 completed successfully
16/05/08 06:40:04 INFO mapreduce.Job: Counters: 49
3) View the results: $ bin/hdfs dfs -text /output6/par*
Output:
hadoop 2
jps 1
mapreduce 2
yarn 1
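Each output line is a word followed by its count (tab-separated in the actual HDFS output). As a small illustrative helper, not part of Hadoop, the listing can be parsed into a dictionary in Python:

```python
def parse_wordcount(text):
    """Parse `word<whitespace>count` lines into a {word: count} dict."""
    counts = {}
    for line in text.splitlines():
        parts = line.split()
        if len(parts) == 2:
            counts[parts[0]] = int(parts[1])
    return counts

# The result listing shown above.
result_text = """hadoop 2
jps 1
mapreduce 2
yarn 1"""

print(parse_wordcount(result_text))  # {'hadoop': 2, 'jps': 1, 'mapreduce': 2, 'yarn': 1}
```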
6. MapReduce history server
1) Start it: $ sbin/mr-jobhistory-daemon.sh start historyserver
2) Web UI: http://hadoop-senior02.beifeng.com:19888
7. What HDFS, YARN, and MapReduce do
1) HDFS: a distributed, highly fault-tolerant file system, suitable for deployment on inexpensive commodity hardware.
HDFS has a master/slave architecture consisting of a NameNode and DataNodes: the NameNode manages the file system namespace, while the DataNodes provide the storage. DataNodes store files as blocks, 128 MB each by default in Hadoop 2.x.
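As a quick arithmetic sketch of the block model, the number of blocks a file occupies is its size divided by the 128 MB block size, rounded up (the last block may be smaller than 128 MB):

```python
import math

BLOCK_SIZE = 128 * 1024 * 1024  # default HDFS block size in Hadoop 2.x, in bytes

def num_blocks(file_size_bytes):
    """Number of HDFS blocks needed to store a file of the given size."""
    return math.ceil(file_size_bytes / BLOCK_SIZE)

# A 300 MB file needs 3 blocks: 128 MB + 128 MB + 44 MB.
print(num_blocks(300 * 1024 * 1024))  # 3
```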
2) YARN: a general-purpose resource management system that provides unified resource management and scheduling for upper-layer applications.
YARN consists of a ResourceManager and NodeManagers: the ResourceManager handles resource scheduling and allocation across the cluster, while each NodeManager manages the resources and runs the tasks on its own node.
3) MapReduce: a computation model with two phases, Map and Reduce.
The map phase processes each line of input and emits key-value pairs, which are passed to the reduce phase; the reduce phase aggregates and summarizes the data received from the map phase.
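The WordCount flow above can be sketched in plain Python as an illustration of the model (this is not Hadoop code): the map step emits a (word, 1) pair per word, a shuffle step groups the pairs by key as Hadoop does between the phases, and the reduce step sums each group.

```python
from collections import defaultdict

def map_phase(line):
    """Emit a (word, 1) pair for every word on the line."""
    return [(word, 1) for word in line.split()]

def shuffle(pairs):
    """Group values by key, as the framework does between map and reduce."""
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(key, values):
    """Sum the counts for one word."""
    return key, sum(values)

lines = ["hadoop yarn", "hadoop mapreduce", "jps mapreduce"]
pairs = [pair for line in lines for pair in map_phase(line)]
counts = dict(reduce_phase(k, v) for k, v in shuffle(pairs).items())
print(counts)  # {'hadoop': 2, 'yarn': 1, 'mapreduce': 2, 'jps': 1}
```

In the real framework the three steps run distributed across the cluster, but the data flow is the same as in this single-process sketch.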
That is all for this article; I hope it helps with your learning.