keyknight

Why the Hadoop DataNode failed to start

Log: contents of hadoop-cdhuser-datanode-zabhb000.log

2015-01-16 11:38:29,272 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: registered UNIX signal handlers for [TERM, HUP, INT]

2015-01-16 11:38:30,403 INFO org.apache.hadoop.metrics2.impl.MetricsConfig: loaded properties from hadoop-metrics2.properties

2015-01-16 11:38:30,468 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Scheduled snapshot period at 10 second(s).

2015-01-16 11:38:30,468 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: DataNode metrics system started

2015-01-16 11:38:30,471 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: File descriptor passing is enabled.

2015-01-16 11:38:30,473 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Configured hostname is zabhb000

2015-01-16 11:38:30,474 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Starting DataNode with maxLockedMemory = 0

2015-01-16 11:38:30,495 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Opened streaming server at /0.0.0.0:50010

2015-01-16 11:38:30,498 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Balancing bandwith is 41943040 bytes/s

2015-01-16 11:38:30,499 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Waiting for threadgroup to exit, active threads is 0

2015-01-16 11:38:30,499 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Shutdown complete.

2015-01-16 11:38:30,500 FATAL org.apache.hadoop.hdfs.server.datanode.DataNode: Exception in secureMain

java.io.IOException: the path component: '/u06/cdhuser/data/hadoop_dfs' is group-writable, and the group is not root.  Its permissions are 0775, and it is owned by gid 501.  Please fix this or select a different socket path.

        at org.apache.hadoop.net.unix.DomainSocket.validateSocketPathSecurity0(Native Method)

        at org.apache.hadoop.net.unix.DomainSocket.bindAndListen(DomainSocket.java:189)

        at org.apache.hadoop.hdfs.net.DomainPeerServer.<init>(DomainPeerServer.java:42)

        at org.apache.hadoop.hdfs.server.datanode.DataNode.getDomainPeerServer(DataNode.java:611)

        at org.apache.hadoop.hdfs.server.datanode.DataNode.initDataXceiver(DataNode.java:577)

        at org.apache.hadoop.hdfs.server.datanode.DataNode.startDataNode(DataNode.java:773)

        at org.apache.hadoop.hdfs.server.datanode.DataNode.<init>(DataNode.java:292)

        at org.apache.hadoop.hdfs.server.datanode.DataNode.makeInstance(DataNode.java:1893)

        at org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:1780)

        at org.apache.hadoop.hdfs.server.datanode.DataNode.createDataNode(DataNode.java:1827)

        at org.apache.hadoop.hdfs.server.datanode.DataNode.secureMain(DataNode.java:2003)

        at org.apache.hadoop.hdfs.server.datanode.DataNode.main(DataNode.java:2027)

2015-01-16 11:38:30,504 INFO org.apache.hadoop.util.ExitUtil: Exiting with status 1

2015-01-16 11:38:30,505 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: SHUTDOWN_MSG:

/************************************************************

SHUTDOWN_MSG: Shutting down DataNode at zabhb000/10.10.1.127

Cause of the error:

My hdfs-site.xml configuration:

<property>

<name>dfs.domain.socket.path</name>

<value>/u06/cdhuser/data/hadoop_dfs/dn_socket_handler</value>

</property>

However, /u06/ is a symlink (created with ln -sf) to a directory under /home/, and the directory it points to is group-writable (mode 0775) with a non-root group.
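As the stack trace shows, the DataNode validates every component of dfs.domain.socket.path in DomainSocket.validateSocketPathSecurity0 and aborts if a component is group-writable while its group is not root, which is exactly what the exception message reports for /u06/cdhuser/data/hadoop_dfs (mode 0775, gid 501). A quick way to see which component trips the check, sketched with the paths from this log (adjust for your own layout):

ls -ld / /u06 /u06/cdhuser /u06/cdhuser/data /u06/cdhuser/data/hadoop_dfs
# any component shown as group-writable (e.g. drwxrwxr-x) whose group is not root
# will trigger the FATAL "Please fix this or select a different socket path" above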


Solution: either install Hadoop under the real directory that the symlink points to (and use that path in the configuration), or fix the permissions on /u06/.
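A minimal sketch of the permissions fix, assuming the directory layout from the log above (paths and ownership on other clusters will differ):

chmod g-w /u06/cdhuser/data/hadoop_dfs            # the component flagged in the log
chmod g-w /u06 /u06/cdhuser /u06/cdhuser/data     # if these are also group-writable
# chmod follows the symlink, so /u06 above changes the real directory under /home/.
# Then restart the DataNode. Alternatively, point dfs.domain.socket.path at a
# root-owned directory (for example one under /var/run) instead of the symlinked path.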
