
[Database] Why live nodes shows 1 in a Hadoop 2.6 distributed deployment


1. Problem description

While deploying HDFS in distributed mode on Hadoop 2.x, I ran into a strange problem:

After starting DFS with the start-dfs.sh command, a DataNode process was visible on every datanode host, yet the namenode's web UI reported only 1 live node.
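It can help to cross-check the web UI count from the command line. A minimal sketch: on a live cluster you would run `hdfs dfsadmin -report` (assuming the Hadoop binaries are on the PATH) and count the per-datanode `Name:` entries; the sample report text below is illustrative.

```shell
# On a live cluster:
#   hdfs dfsadmin -report
# Here we parse a sample of that report's format to show the idea:
report='Name: 192.168.1.125:50010 (slave1)
Decommission Status : Normal
Name: 192.168.1.126:50010 (slave2)
Decommission Status : Normal'

# Each registered (live) datanode contributes one "Name:" line.
live=$(printf '%s\n' "$report" | grep -c '^Name:')
echo "live datanodes reported: $live"   # -> 2 in this sample
```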


2. Analysis

Opening hadoop-root-datanode.log under the hadoop2.x/logs directory revealed an interesting exception:

2015-12-20 22:55:21,374 ERROR org.apache.hadoop.hdfs.server.datanode.DataNode: Initialization failed for Block pool BP-362474484-127.0.1.1-1450617599362 (Datanode Uuid d3a052d5-7319-4bdf-98e1-6eea4637cb3d) service to 192.168.1.126/192.168.1.126:8020 Datanode denied communication with namenode because hostname cannot be resolved (ip=192.168.1.125, hostname=192.168.1.125): DatanodeRegistration(0.0.0.0, datanodeUuid=d3a052d5-7319-4bdf-98e1-6eea4637cb3d, infoPort=50075, ipcPort=50020, storageInfo=lv=-56;cid=CID-09307029-b7c7-4163-b2f1-b96f6c630758;nsid=1007280041;c=0)
    at org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager.registerDatanode(DatanodeManager.java:889)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.registerDatanode(FSNamesystem.java:5048)
    at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.registerDatanode(NameNodeRpcServer.java:1142)
    at org.apache.hadoop.hdfs.protocolPB.DatanodeProtocolServerSideTranslatorPB.registerDatanode(DatanodeProtocolServerSideTranslatorPB.java:92)
    at org.apache.hadoop.hdfs.protocol.proto.DatanodeProtocolProtos$DatanodeProtocolService$2.callBlockingMethod(DatanodeProtocolProtos.java:27329)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:619)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:962)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2040)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2036)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:422)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1656)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2034)

As the message says, the namenode denied the datanode's registration because the datanode's address could not be resolved to a hostname.

Turning to the Hadoop documentation, hdfs-default.xml describes the dfs.namenode.datanode.registration.ip-hostname-check property as follows:

If true (the default), then the namenode requires that a connecting datanode's address must be resolved to a hostname. If necessary, a reverse DNS lookup is performed. All attempts to register a datanode from an unresolvable address are rejected. It is recommended that this setting be left on to prevent accidental registration of datanodes listed by hostname in the excludes file during a DNS outage. Only set this to false in environments where there is no infrastructure to support reverse DNS lookup.

So when the slaves file is populated with raw IP addresses, instead of hostnames resolvable via DNS or an /etc/hosts file, every datanode registration fails with exactly this error.
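You can reproduce the namenode's view before touching any config by attempting the reverse lookup yourself on the namenode host. A hedged sketch using getent, which consults /etc/hosts and DNS in nsswitch order (roughly what the JVM resolver does); the second IP is the datanode address from the log above:

```shell
# Prints "resolvable" if the IP maps back to a hostname, "unresolvable"
# otherwise -- the same condition the ip-hostname-check enforces.
check_reverse_lookup() {
  ip="$1"
  if getent hosts "$ip" >/dev/null 2>&1; then
    echo "resolvable"
  else
    echo "unresolvable"
  fi
}

check_reverse_lookup 127.0.0.1        # loopback resolves on virtually any host
check_reverse_lookup 192.168.1.125    # the datanode IP from the log
```

If the second call prints "unresolvable" on the namenode host, the registration denial in the log is fully explained.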



3. Fix

Add the following property to hdfs-site.xml and sync the file to every node:

<property>
  <name>dfs.namenode.datanode.registration.ip-hostname-check</name>
  <value>false</value>
</property>
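Disabling the check works, but the documentation quoted earlier recommends leaving it on. If maintaining a hosts file is acceptable, the more robust alternative is to give every node a resolvable name and list those names, not IPs, in the slaves file. A sketch (the hostnames are purely illustrative):

```
# /etc/hosts -- identical on every node
192.168.1.126  master
192.168.1.125  slave1

# $HADOOP_HOME/etc/hadoop/slaves on the namenode -- hostnames, not IPs
slave1
```

With reverse resolution in place, datanodes register under their hostnames and the check can stay enabled.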