
Linux / HDFS: errors when appending uploaded files (hadoop fs -appendToFile)

一 (I). Background: there are three virtual machines, node1, node2, and node3.

                   HDFS contains the file /itcast/1.txt.

                   On node2, the local test/ directory contains 2.txt and 3.txt.

                   Goal: upload the local files 2.txt and 3.txt and append them to the end of /itcast/1.txt in HDFS.

二 (II). The errors reported:

1. First error

  [root@node2 ~]# hadoop fs -appendToFile 2.txt 3.txt /itcast/1.txt
  2023-06-06 11:22:37,011 WARN hdfs.DataStreamer: DataStreamer Exception
  java.io.IOException: Failed to replace a bad datanode on the existing pipeline due to no more good datanodes being available to try. (Nodes: current=[DatanodeInfoWithStorage[192.168.88.152:9866,DS-c220bd52-1e8d-406a-ac61-8c310e39364f,DISK], DatanodeInfoWithStorage[192.168.88.151:9866,DS-115b8ce0-0944-44f6-8638-e123a08e806f,DISK]], original=[DatanodeInfoWithStorage[192.168.88.151:9866,DS-115b8ce0-0944-44f6-8638-e123a08e806f,DISK], DatanodeInfoWithStorage[192.168.88.152:9866,DS-c220bd52-1e8d-406a-ac61-8c310e39364f,DISK]]). The current failed datanode replacement policy is DEFAULT, and a client may configure this via 'dfs.client.block.write.replace-datanode-on-failure.policy' in its configuration.
          at org.apache.hadoop.hdfs.DataStreamer.findNewDatanode(DataStreamer.java:1304)
          at org.apache.hadoop.hdfs.DataStreamer.addDatanode2ExistingPipeline(DataStreamer.java:1372)
          at org.apache.hadoop.hdfs.DataStreamer.handleDatanodeReplacement(DataStreamer.java:1598)
          at org.apache.hadoop.hdfs.DataStreamer.setupPipelineInternal(DataStreamer.java:1499)
          at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1481)
          at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:719)
  appendToFile: /root/2.txt

Cause: the HDFS DataNode on node3 has dropped out of the cluster, so when a node in the write pipeline fails there is no healthy datanode left to replace it with.

Fix: bring the node3 DataNode back online; see my other notes for how to troubleshoot a lost DataNode.
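The exception itself also names a client-side setting, 'dfs.client.block.write.replace-datanode-on-failure.policy'. As a sketch (not a drop-in fix): on a small 3-node cluster with replication 3, the DEFAULT policy insists on replacing a failed datanode during append, which can only fail when no spare node exists. Relaxing the policy on the client is a commonly used workaround; the property names below are standard HDFS client configuration, but verify them against your Hadoop version:

```xml
<!-- hdfs-site.xml on the client side; sketch for a small cluster, assuming
     you prefer appends to proceed rather than fail when a datanode dies. -->
<property>
  <name>dfs.client.block.write.replace-datanode-on-failure.policy</name>
  <!-- DEFAULT | ALWAYS | NEVER; NEVER skips replacement entirely -->
  <value>NEVER</value>
</property>
<property>
  <!-- Alternative to NEVER: keep the policy but don't fail if replacement is impossible -->
  <name>dfs.client.block.write.replace-datanode-on-failure.best-effort</name>
  <value>true</value>
</property>
```

Note that NEVER trades durability for availability: the append continues with fewer replicas than configured, so fixing the dead DataNode is still the real solution.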

2. Second error

  [root@node2 ~]# hadoop fs -appendToFile 2.txt 3.txt /itcast/1.txt
  appendToFile: Failed to APPEND_FILE /itcast/1.txt for DFSClient_NONMAPREDUCE_-1199279342_1 on 192.168.88.152 because lease recovery is in progress. Try again later.

Cause: the file only has read permission, so the append cannot obtain a write lease. (As the message itself says, lease recovery from the earlier failed append is also in progress, so waiting a moment and retrying can help.)

Fix: change the permissions of 1.txt in HDFS:

[root@node2 ~]# hadoop fs -chmod 777 /itcast/1.txt
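`hadoop fs -chmod` accepts the same numeric modes as POSIX chmod, so the meaning of 777 can be checked on any Linux box without a cluster. A local-filesystem sketch (the file name is just an example):

```shell
# POSIX sketch of the numeric modes involved; `hadoop fs -chmod` uses the same notation.
cd "$(mktemp -d)"
touch 1.txt
chmod 444 1.txt          # r--r--r-- : readable by all, writable by none (append fails)
chmod 777 1.txt          # rwxrwxrwx : read/write/execute for everyone
stat -c '%a' 1.txt       # prints 777
```

777 is the blunt fix used in the post; if only the owner needs to append, `hadoop fs -chmod u+w /itcast/1.txt` is a tighter alternative.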

3. Third error

  [root@node2 ~]# hadoop fs -appendToFile 2.txt 3.txt /itcast/1.txt
  appendToFile: /root/2.txt

Cause: the local source paths are wrong. The command was run from /root, but 2.txt and 3.txt live in the test/ subdirectory, so the client cannot find /root/2.txt (which is exactly what the error line reports).

Fix: check where the files actually live, then rerun the command from that directory (or pass the correct paths):

  [root@node2 ~]# ll
  total 24
  -rw-r--r-- 1 root root 0 Jun 6 08:24 1.txt
  -rw-------. 1 root root 1365 Jun 4 11:52 anaconda-ks.cfg
  drwxr-xr-x 2 root root 84 Jun 6 08:36 test
  -rw-r--r-- 1 root root 19263 Jun 4 22:00 zookeeper.out
  [root@node2 ~]# cd test
  [root@node2 test]# ll
  total 16
  -rw-r--r-- 1 root root 2 Jun 6 08:25 1.txt
  -rw-r--r-- 1 root root 2 Jun 6 08:26 2.txt
  -rw-r--r-- 1 root root 2 Jun 6 08:26 3.txt
  -rw-r--r-- 1 root root 6 Jun 6 08:36 merge.txt
  [root@node2 test]# hadoop fs -appendToFile 2.txt 3.txt /itcast/1.txt
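The root of this error is that appendToFile resolves its local source arguments relative to the shell's current directory, which is why `cd test` was needed. A hypothetical pattern that avoids the problem is to build absolute paths with `realpath` (standard on Linux) so the command works from any directory; the hadoop line is shown commented out since it needs a live cluster:

```shell
# Sketch: turn relative source paths into absolute ones before calling hadoop.
cd "$(mktemp -d)"                 # stand-in for the local test/ directory
printf 'b\n' > 2.txt
printf 'c\n' > 3.txt
realpath 2.txt                    # prints the absolute path to 2.txt
# hadoop fs -appendToFile "$(realpath 2.txt)" "$(realpath 3.txt)" /itcast/1.txt
```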
