
Why live nodes equals 1 in a Hadoop 2.6 distributed deployment

1. Problem description

While deploying HDFS in distributed mode with Hadoop 2.x, I ran into a strange problem:

After starting DFS with start-dfs.sh, the datanode process was visible on every datanode machine, yet the namenode's web UI reported the number of live nodes as 1.
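A quick way to confirm the symptom from the command line, assuming the Hadoop bin directory is on the PATH, is to ask the namenode for a datanode report; the number of live datanodes it prints should match what the web UI shows:

hdfs dfsadmin -report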


2. Problem analysis

Opening the hadoop-root-datanode.log file under the hadoop2.x/logs directory, I found an interesting exception:

2015-12-20 22:55:21,374 ERROR org.apache.hadoop.hdfs.server.datanode.DataNode: Initialization failed for Block pool BP-362474484-127.0.1.1-1450617599362 (Datanode Uuid d3a052d5-7319-4bdf-98e1-6eea4637cb3d) service to 192.168.1.126/192.168.1.126:8020 Datanode denied communication with namenode because hostname cannot be resolved (ip=192.168.1.125, hostname=192.168.1.125): DatanodeRegistration(0.0.0.0, datanodeUuid=d3a052d5-7319-4bdf-98e1-6eea4637cb3d, infoPort=50075, ipcPort=50020, storageInfo=lv=-56;cid=CID-09307029-b7c7-4163-b2f1-b96f6c630758;nsid=1007280041;c=0)
  at org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager.registerDatanode(DatanodeManager.java:889)
  at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.registerDatanode(FSNamesystem.java:5048)
  at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.registerDatanode(NameNodeRpcServer.java:1142)
  at org.apache.hadoop.hdfs.protocolPB.DatanodeProtocolServerSideTranslatorPB.registerDatanode(DatanodeProtocolServerSideTranslatorPB.java:92)
  at org.apache.hadoop.hdfs.protocol.proto.DatanodeProtocolProtos$DatanodeProtocolService$2.callBlockingMethod(DatanodeProtocolProtos.java:27329)
  at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:619)
  at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:962)
  at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2040)
  at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2036)
  at java.security.AccessController.doPrivileged(Native Method)
  at javax.security.auth.Subject.doAs(Subject.java:422)
  at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1656)
  at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2034)

As the message shows, the datanode was denied communication with the namenode because its hostname could not be resolved.

Checking the Hadoop documentation, hdfs-default.xml describes the dfs.namenode.datanode.registration.ip-hostname-check property as follows:

If true (the default), then the namenode requires that a connecting datanode's address must be resolved to a hostname. If necessary, a reverse DNS lookup is performed. All attempts to register a datanode from an unresolvable address are rejected. It is recommended that this setting be left on to prevent accidental registration of datanodes listed by hostname in the excludes file during a DNS outage. Only set this to false in environments where there is no infrastructure to support reverse DNS lookup.
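According to that description, the namenode performs a reverse lookup on the connecting datanode's address. Before changing any configuration, you can check on the namenode host whether such a lookup actually succeeds for the rejected IP from the log above (the host and getent utilities are assumed to be available; host queries DNS only, while getent also consults /etc/hosts):

host 192.168.1.125
getent hosts 192.168.1.125

If neither command returns a hostname, the registration check described in the quote will reject that datanode.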

So if, when configuring the datanodes, the slaves file lists nodes by IP address directly instead of by hostnames resolved through DNS or the hosts file, this error occurs.
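An alternative to the workaround below, if you prefer to leave the check enabled, is to give every node a resolvable hostname, for example through /etc/hosts on all machines and hostnames in the slaves file. The hostnames here are placeholders for illustration only; the IPs are the ones from the log above:

/etc/hosts (identical on every node):
192.168.1.126   hadoop-master
192.168.1.125   hadoop-slave1

$HADOOP_HOME/etc/hadoop/slaves (on the namenode):
hadoop-slave1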



3. Fix

Add the following to hdfs-site.xml and sync it to every node:

<property>
  <name>dfs.namenode.datanode.registration.ip-hostname-check</name>
  <value>false</value>
</property>
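With the property synced to all nodes, restarting DFS and re-running the datanode report should show every datanode as live (assuming the Hadoop sbin and bin directories are on the PATH):

stop-dfs.sh
start-dfs.sh
hdfs dfsadmin -report

Note that the documentation quoted above recommends leaving the check on; disabling it is the pragmatic choice only when there is no infrastructure for reverse DNS lookup.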




