Today, while appending content to a file on HDFS, the following error occurred:
2023-04-14 17:40:51,004 WARN hdfs.DataStreamer: DataStreamer Exception
java.io.IOException: Failed to replace a bad datanode on the existing pipeline due to no more good datanodes being available to try. (Nodes: current=[DatanodeInfoWithStorage[192.168.88.101:9866,DS-826962b2-8b47-4388-8083-cde2b36342a2,DISK]], original=[DatanodeInfoWithStorage[192.168.88.101:9866,DS-826962b2-8b47-4388-8083-cde2b36342a2,DISK]]). The current failed datanode replacement policy is DEFAULT, and a client may configure this via 'dfs.client.block.write.replace-datanode-on-failure.policy' in its configuration.
at org.apache.hadoop.hdfs.DataStreamer.findNewDatanode(DataStreamer.java:1352)
at org.apache.hadoop.hdfs.DataStreamer.addDatanode2ExistingPipeline(DataStreamer.java:1420)
at org.apache.hadoop.hdfs.DataStreamer.handleDatanodeReplacement(DataStreamer.java:1646)
at org.apache.hadoop.hdfs.DataStreamer.setupPipelineInternal(DataStreamer.java:1547)
at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1529)
at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:717)
appendToFile: Failed to replace a bad datanode on the existing pipeline due to no more good datanodes being available to try. (Nodes: current=[DatanodeInfoWithStorage[192.168.88.101:9866,DS-826962b2-8b47-4388-8083-cde2b36342a2,DISK]], original=[DatanodeInfoWithStorage[192.168.88.101:9866,DS-826962b2-8b47-4388-8083-cde2b36342a2,DISK]]). The current failed datanode replacement policy is DEFAULT, and a client may configure this via 'dfs.client.block.write.replace-datanode-on-failure.policy' in its configuration.
The failed-datanode replacement policy is controlled by dfs.client.block.write.replace-datanode-on-failure.policy, and its default value is DEFAULT. With the DEFAULT policy, when the replication factor is 3 or more, the client tries to replace the failed datanode in the pipeline with another datanode and retry the write; with only 2 replicas, it does not replace the datanode and simply keeps writing to the remaining ones. On a cluster with only 3 datanodes there is no spare node to swap in, so as soon as one node stops responding, the append fails with the error above. In this case only one datanode (192.168.88.101) was alive; after starting the other two datanodes, the problem was resolved.
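If bringing more datanodes online is not an option (for example on a small test cluster), another approach is to relax the replacement behavior on the client side. The sketch below shows client-side hdfs-site.xml settings using the standard HDFS properties for this policy; it is a workaround that trades pipeline durability for availability, so choose either the NEVER policy or best-effort, not necessarily both, depending on how much replica safety you are willing to give up:

  <!-- Option 1: never try to replace a failed datanode in the write pipeline -->
  <property>
    <name>dfs.client.block.write.replace-datanode-on-failure.policy</name>
    <value>NEVER</value>
  </property>

  <!-- Option 2: keep the DEFAULT policy, but continue writing to the remaining
       datanodes if no replacement datanode can be found -->
  <property>
    <name>dfs.client.block.write.replace-datanode-on-failure.best-effort</name>
    <value>true</value>
  </property>

After changing these settings, rerun the appendToFile command with the updated client configuration. Note that both options mean an appended block may temporarily have fewer replicas than the configured replication factor until the missing datanodes come back.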