Fixing the "current failed datanode replacement policy is DEFAULT" error when appending data to an HDFS file

Today, while appending content to an HDFS file, I ran into the following error:

2023-04-14 17:40:51,004 WARN hdfs.DataStreamer: DataStreamer Exception
java.io.IOException: Failed to replace a bad datanode on the existing pipeline due to no more good datanodes being available to try. (Nodes: current=[DatanodeInfoWithStorage[192.168.88.101:9866,DS-826962b2-8b47-4388-8083-cde2b36342a2,DISK]], original=[DatanodeInfoWithStorage[192.168.88.101:9866,DS-826962b2-8b47-4388-8083-cde2b36342a2,DISK]]). The current failed datanode replacement policy is DEFAULT, and a client may configure this via 'dfs.client.block.write.replace-datanode-on-failure.policy' in its configuration.
        at org.apache.hadoop.hdfs.DataStreamer.findNewDatanode(DataStreamer.java:1352)
        at org.apache.hadoop.hdfs.DataStreamer.addDatanode2ExistingPipeline(DataStreamer.java:1420)
        at org.apache.hadoop.hdfs.DataStreamer.handleDatanodeReplacement(DataStreamer.java:1646)
        at org.apache.hadoop.hdfs.DataStreamer.setupPipelineInternal(DataStreamer.java:1547)
        at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1529)
        at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:717)
appendToFile: Failed to replace a bad datanode on the existing pipeline due to no more good datanodes being available to try. (Nodes: current=[DatanodeInfoWithStorage[192.168.88.101:9866,DS-826962b2-8b47-4388-8083-cde2b36342a2,DISK]], original=[DatanodeInfoWithStorage[192.168.88.101:9866,DS-826962b2-8b47-4388-8083-cde2b36342a2,DISK]]). The current failed datanode replacement policy is DEFAULT, and a client may configure this via 'dfs.client.block.write.replace-datanode-on-failure.policy' in its configuration.
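
Note the Nodes list in the exception: current and original each contain the same single datanode (192.168.88.101:9866), meaning only one datanode was alive when the append started. A standard way to confirm how many datanodes are actually live (this command is not part of the original post's output):

    hdfs dfsadmin -report | grep "Live datanodes"

If the live count is below the replication factor (3 by default), the write pipeline has no spare node to fall back on when a pipeline member fails.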

The value of dfs.client.block.write.replace-datanode-on-failure.policy defaults to DEFAULT. Under DEFAULT, when a block has 3 or more replicas, the client tries to replace a failed node in the pipeline with another datanode and keep writing; with only 2 replicas, it does not replace the datanode and simply continues the write. In a cluster with just 3 datanodes there is no spare node to swap in, so a write breaks as soon as a single node stops responding; here, only one datanode was even running. After starting the other two datanodes, the problem was solved.
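
Assuming Hadoop 3.x (the 9866 datanode port in the log above suggests it), the stopped datanodes can be brought back with the standard daemon command, run on each affected host:

    hdfs --daemon start datanode     # run on each stopped datanode host

If bringing up more datanodes is not an option (for example, a test cluster that will never have three live nodes), the client-side policy named in the exception can be relaxed instead. Below is a minimal sketch using a per-command override; the property key is the one documented by Hadoop and quoted in the error message, while the file paths are purely illustrative. Note that NEVER weakens the write pipeline's durability guarantees and is only sensible on small or test clusters:

    # Append without attempting datanode replacement on pipeline failure
    hdfs dfs -D dfs.client.block.write.replace-datanode-on-failure.policy=NEVER \
             -appendToFile local.txt /data/test.txt

The same key (and the related dfs.client.block.write.replace-datanode-on-failure.best-effort) can also be set permanently in the client's hdfs-site.xml.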

