
Hadoop Data Directory Migration



As data kept flowing in and growing, the disk holding the cluster's original deployment directory ran out of space, so Hadoop's data storage had to be migrated to another, much larger disk. A second goal was to separate data from programs so the two cannot interfere with each other.

Below are the migration steps and some points to watch out for:

Before doing anything, stop the cluster. If HBase is running, stop it too: HBase's storage depends on HDFS, and moving the directories while HBase is still up will cause errors.

Modifying the configuration files

Hadoop's most important storage setting lives in core-site.xml: change the value of hadoop.tmp.dir to a path on the new disk.
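As a sketch, the property looks like this (the /data2/hadoop path is an assumed mount point for the new disk; substitute your own):

```xml
<!-- core-site.xml: hadoop.tmp.dir is the base path that many other
     HDFS directories default to. /data2/hadoop is a hypothetical
     mount point for the new disk. -->
<property>
    <name>hadoop.tmp.dir</name>
    <value>/data2/hadoop/tmp</value>
</property>
```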

To keep data separate from programs, we also decided to move out everything that grows over time: the log files, the pid directory, and the journal directory.

The log and pid directories are configured in hadoop-env.sh: export HADOOP_PID_DIR and HADOOP_LOG_DIR pointing at paths on the new disk.
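A minimal sketch of the relevant hadoop-env.sh lines, again assuming /data2/hadoop is the new disk's mount point:

```shell
# hadoop-env.sh -- redirect pid files and logs to the new disk.
# /data2/hadoop is a hypothetical path; adjust to your own layout.
export HADOOP_PID_DIR=/data2/hadoop/pids
export HADOOP_LOG_DIR=/data2/hadoop/logs
```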

The journal directory is configured via dfs.journalnode.edits.dir in hdfs-site.xml.
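For example (the /data2/hadoop/journal path is a hypothetical location on the new disk):

```xml
<!-- hdfs-site.xml: where each JournalNode keeps its edit logs. -->
<property>
    <name>dfs.journalnode.edits.dir</name>
    <value>/data2/hadoop/journal</value>
</property>
```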

Similarly, the log and pid paths for YARN and HBase can be set with export statements in the corresponding *_env.sh files.

After changing Hadoop's configuration files, copy them into the hbase/conf directory as well.

HBase's log and pid directories are configured via HBASE_PID_DIR and HBASE_LOG_DIR in hbase-daemon.sh.

Spark's log and pid directories are set via SPARK_PID_DIR and SPARK_LOG_DIR in spark-env.sh.

Once the changes are made, copy the configuration files to every worker node.

Then move the original data, log, and pid directories onto the new disk, restart the cluster, and check the startup output for errors.
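The move step can be sketched roughly as below. migrate_dir is a hypothetical helper, and the paths in the usage comments are placeholders for the old deployment directory and the new mount point; run it only with the whole cluster (including HBase) stopped.

```shell
# Move a directory tree under a new parent, creating the parent first.
migrate_dir() {
    src="$1"; dest_parent="$2"
    mkdir -p "$dest_parent" || return 1
    # mv preserves ownership and permissions on the same host; across
    # hosts you would use rsync -a (plus scp for the config files).
    mv "$src" "$dest_parent/" || return 1
}

# Hypothetical usage: old data under /data/hadoop, new disk at /data2.
# migrate_dir /data/hadoop/hdfs /data2/hadoop
# migrate_dir /data/hadoop/logs /data2/hadoop
```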

Update

Configuration updated in hdfs-site.xml:

```xml
<property>
    <name>dfs.name.dir</name>
    <value>/data2/hadoop/hdfs/name</value>
</property>
<property>
    <name>dfs.data.dir</name>
    <value>/data2/hadoop/hdfs/data</value>
</property>
```

These are the directories for HDFS metadata and block data, respectively; if they are not configured, both default to locations under hadoop.tmp.dir. (In Hadoop 2.x these keys are deprecated in favor of dfs.namenode.name.dir and dfs.datanode.data.dir, but the old names still work.)

After reformatting the HDFS filesystem, HBase failed to start: the HMaster exited on its own.

Log output:

```
2016-01-15 14:01:38,231 DEBUG [MASTER_SERVER_OPERATIONS-zx-hadoop-210-11:60000-4] master.DeadServer: Finished processing zx-hadoop-210-24,60020,1452828414814
2016-01-15 14:01:38,231 ERROR [MASTER_SERVER_OPERATIONS-zx-hadoop-210-11:60000-4] executor.EventHandler: Caught throwable while processing event M_SERVER_SHUTDOWN
java.io.IOException: failed log splitting for zx-hadoop-210-24,60020,1452828414814, will retry
	at org.apache.hadoop.hbase.master.handler.ServerShutdownHandler.resubmit(ServerShutdownHandler.java:322)
	at org.apache.hadoop.hbase.master.handler.ServerShutdownHandler.process(ServerShutdownHandler.java:202)
	at org.apache.hadoop.hbase.executor.EventHandler.run(EventHandler.java:128)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
	at java.lang.Thread.run(Thread.java:745)
Caused by: java.io.IOException: error or interrupted while splitting logs in [hdfs://ns1/hbase/WALs/zx-hadoop-210-24,60020,1452828414814-splitting] Task = installed = 1 done = 0 error = 0
	at org.apache.hadoop.hbase.master.SplitLogManager.splitLogDistributed(SplitLogManager.java:362)
	at org.apache.hadoop.hbase.master.MasterFileSystem.splitLog(MasterFileSystem.java:410)
	at org.apache.hadoop.hbase.master.MasterFileSystem.splitLog(MasterFileSystem.java:384)
	at org.apache.hadoop.hbase.master.MasterFileSystem.splitLog(MasterFileSystem.java:282)
	at org.apache.hadoop.hbase.master.handler.ServerShutdownHandler.process(ServerShutdownHandler.java:195)
	... 4 more
2016-01-15 14:01:38,232 INFO [master:zx-hadoop-210-11:60000-EventThread] zookeeper.ClientCnxn: EventThread shut down
2016-01-15 14:01:38,232 INFO [master:zx-hadoop-210-11:60000.oldLogCleaner] zookeeper.ZooKeeper: Session: 0x25243ddd648000a closed
2016-01-15 14:01:38,232 DEBUG [MASTER_SERVER_OPERATIONS-zx-hadoop-210-11:60000-4] master.DeadServer: Finished processing zx-hadoop-210-22,60020,1452828414925
2016-01-15 14:01:38,233 ERROR [MASTER_SERVER_OPERATIONS-zx-hadoop-210-11:60000-4] executor.EventHandler: Caught throwable while processing event M_SERVER_SHUTDOWN
java.io.IOException: Server is stopped
	at org.apache.hadoop.hbase.master.handler.ServerShutdownHandler.process(ServerShutdownHandler.java:183)
	at org.apache.hadoop.hbase.executor.EventHandler.run(EventHandler.java:128)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
	at java.lang.Thread.run(Thread.java:745)
2016-01-15 14:01:38,338 DEBUG [master:zx-hadoop-210-11:60000] catalog.CatalogTracker: Stopping catalog tracker org.apache.hadoop.hbase.catalog.CatalogTracker@6c4b58f0
2016-01-15 14:01:38,338 INFO [master:zx-hadoop-210-11:60000] client.HConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x15243ddd6340004
2016-01-15 14:01:38,343 INFO [master:zx-hadoop-210-11:60000] zookeeper.ZooKeeper: Session: 0x15243ddd6340004 closed
2016-01-15 14:01:38,343 INFO [master:zx-hadoop-210-11:60000-EventThread] zookeeper.ClientCnxn: EventThread shut down
2016-01-15 14:01:38,343 INFO [zx-hadoop-210-11,60000,1452837685871.splitLogManagerTimeoutMonitor] master.SplitLogManager$TimeoutMonitor: zx-hadoop-210-11,60000,1452837685871.splitLogManagerTimeoutMonitor exiting
2016-01-15 14:01:38,347 INFO [master:zx-hadoop-210-11:60000] zookeeper.ZooKeeper: Session: 0x35243ddd73b0001 closed
2016-01-15 14:01:38,347 INFO [main-EventThread] zookeeper.ClientCnxn: EventThread shut down
2016-01-15 14:01:38,347 INFO [master:zx-hadoop-210-11:60000] master.HMaster: HMaster main thread exiting
2016-01-15 14:01:38,350 ERROR [main] master.HMasterCommandLine: Master exiting
java.lang.RuntimeException: HMaster Aborted
	at org.apache.hadoop.hbase.master.HMasterCommandLine.startMaster(HMasterCommandLine.java:192)
	at org.apache.hadoop.hbase.master.HMasterCommandLine.run(HMasterCommandLine.java:134)
	at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
	at org.apache.hadoop.hbase.util.ServerCommandLine.doMain(ServerCommandLine.java:126)
	at org.apache.hadoop.hbase.master.HMaster.main(HMaster.java:2785)
Fri Jan 15 14:15:02 CST 2016 Starting master on zx-hadoop-210-11
```

Solution

1. Change to ZooKeeper's bin directory.
2. Run sh zkCli.sh and delete HBase's stale znode (this wipes HBase's state in ZooKeeper, which is acceptable here because HDFS was freshly formatted):

```
ls /
rmr /hbase
quit
```

Restart HBase.

Author: @小黑

Reposted from: https://www.cnblogs.com/jchubby/p/5449357.html
