
The most detailed guide on the web: fixing a Hadoop HA cluster that comes up with both NameNodes in standby (illustrated)

Without further ado, straight to the solution!

Solution

The fix below is what worked on my Hadoop HA cluster.

1. First, add the following parameter to hdfs-site.xml; its default value is false:

<property>
  <name>dfs.ha.automatic-failover.enabled.ns</name>
  <value>true</value>
</property>

2. Next, add the following parameter to core-site.xml. Its value is the address list of the ZooKeeper servers, which the ZKFC will use.
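This is the ha.zookeeper.quorum property. The host list below is taken from the ZooKeeper connect string that appears in the log output further down; substitute your own ZooKeeper hosts:

<property>
  <name>ha.zookeeper.quorum</name>
  <value>bigdata-pro01.kfk.com:2181,bigdata-pro02.kfk.com:2181,bigdata-pro03.kfk.com:2181</value>
</property>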

In an HA or HDFS federation setup, both parameters above must also be suffixed with the NameService ID, for example dfs.ha.automatic-failover.enabled.mycluster. A few other parameters can tune automatic failover as well, such as ha.zookeeper.session-timeout.ms, but they are unnecessary for most installations.

After adding the configuration parameters above, the next step is to initialize the required state in ZooKeeper. Run the command below (hdfs zkfc -formatZK) on either NameNode; it creates a znode in ZooKeeper.

To run it, change into the bin directory under your Hadoop installation directory, where the hdfs command lives, and execute it as shown below; that repairs the problem.

Note: before doing this, the ZooKeeper process on every machine must already be started.

[kfk@bigdata-pro01 bin]$ pwd
/opt/modules/hadoop-2.6.0/bin
[kfk@bigdata-pro01 bin]$ ./hdfs zkfc -formatZK


18/06/16 10:44:28 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=bigdata-pro01.kfk.com:2181,bigdata-pro02.kfk.com:2181,bigdata-pro03.kfk.com:2181 sessionTimeout=5000 watcher=org.apache.hadoop.ha.ActiveStandbyElector$WatcherWithClientRef@20deea7f
18/06/16 10:44:28 INFO zookeeper.ClientCnxn: Opening socket connection to server bigdata-pro01.kfk.com/192.168.80.151:2181. Will not attempt to authenticate using SASL (unknown error)
18/06/16 10:44:28 INFO zookeeper.ClientCnxn: Socket connection established to bigdata-pro01.kfk.com/192.168.80.151:2181, initiating session
18/06/16 10:44:28 INFO zookeeper.ClientCnxn: Session establishment complete on server bigdata-pro01.kfk.com/192.168.80.151:2181, sessionid = 0x164065bc2a90001, negotiated timeout = 5000

The configured parent znode /hadoop-ha/ns already exists.
Are you sure you want to clear all failover information from
ZooKeeper?
WARNING: Before proceeding, ensure that all HDFS services and
failover controllers are stopped!

Proceed formatting /hadoop-ha/ns? (Y or N) 18/06/16 10:44:28 INFO ha.ActiveStandbyElector: Session connected.
y
18/06/16 10:44:57 INFO ha.ActiveStandbyElector: Recursively deleting /hadoop-ha/ns from ZK…
18/06/16 10:44:57 INFO ha.ActiveStandbyElector: Successfully deleted /hadoop-ha/ns from ZK.
18/06/16 10:44:57 INFO ha.ActiveStandbyElector: Successfully created /hadoop-ha/ns in ZK.
18/06/16 10:44:57 INFO zookeeper.ClientCnxn: EventThread shut down
18/06/16 10:44:57 INFO zookeeper.ZooKeeper: Session: 0x164065bc2a90001 closed
[kfk@bigdata-pro01 bin]$
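
To double-check the result, you can list the znode from ZooKeeper's own CLI. A quick sketch (zkCli.sh ships with ZooKeeper):

[kfk@bigdata-pro01 bin]$ zkCli.sh -server bigdata-pro01.kfk.com:2181
[zk: bigdata-pro01.kfk.com:2181(CONNECTED) 0] ls /hadoop-ha
[ns]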


Start and test

1. First stop the Hadoop and ZooKeeper processes (see the sketch after step 2).

2. Start the ZooKeeper processes again.
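For steps 1 and 2, a typical sequence looks like the sketch below. The Hadoop path matches this cluster; the ZooKeeper path is a hypothetical example, so substitute your own install directory:

# Step 1: stop the Hadoop daemons, then ZooKeeper, on each node
/opt/modules/hadoop-2.6.0/sbin/stop-all.sh
/opt/modules/zookeeper-3.4.5/bin/zkServer.sh stop    # hypothetical ZooKeeper path

# Step 2: start ZooKeeper again on every node and check its state
/opt/modules/zookeeper-3.4.5/bin/zkServer.sh start
/opt/modules/zookeeper-3.4.5/bin/zkServer.sh status  # should report leader or follower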

3. Start the ZKFC process:

[kfk@bigdata-pro01 hadoop-2.6.0]$ pwd
/opt/modules/hadoop-2.6.0
[kfk@bigdata-pro01 hadoop-2.6.0]$ sbin/hadoop-daemon.sh start zkfc
starting zkfc, logging to /opt/modules/hadoop-2.6.0/logs/hadoop-kfk-zkfc-bigdata-pro01.kfk.com.out
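
You can verify the daemon came up with jps; a DFSZKFailoverController process should be listed (the PIDs below are illustrative):

[kfk@bigdata-pro01 hadoop-2.6.0]$ jps
4101 QuorumPeerMain
4217 DFSZKFailoverController
4355 Jps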

4. Go to the sbin directory under the Hadoop installation directory and run start-dfs.sh to start the NameNodes. Note that you need to run it on the node configured as the NameNode master.

Alternatively, just run sbin/start-all.sh.
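Once everything is up, confirm that exactly one NameNode has gone active. Assuming the NameNode IDs are nn1 and nn2 (nn1 also appears in the final command of this post):

[kfk@bigdata-pro01 hadoop-2.6.0]$ bin/hdfs haadmin -getServiceState nn1
active
[kfk@bigdata-pro01 hadoop-2.6.0]$ bin/hdfs haadmin -getServiceState nn2
standby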


[kfk@bigdata-pro02 hadoop-2.6.0]$ bin/hdfs -help
Usage: hdfs [--config confdir] COMMAND
where COMMAND is one of:
dfs run a filesystem command on the file systems supported in Hadoop.
namenode -format format the DFS filesystem
secondarynamenode run the DFS secondary namenode
namenode run the DFS namenode
journalnode run the DFS journalnode
zkfc run the ZK Failover Controller daemon
datanode run a DFS datanode
dfsadmin run a DFS admin client
haadmin run a DFS HA admin client
fsck run a DFS filesystem checking utility
balancer run a cluster balancing utility
jmxget get JMX exported values from NameNode or DataNode.
mover run a utility to move block replicas across
storage types
oiv apply the offline fsimage viewer to an fsimage
oiv_legacy apply the offline fsimage viewer to an legacy fsimage
oev apply the offline edits viewer to an edits file
fetchdt fetch a delegation token from the NameNode
getconf get config values from configuration
groups get the groups which users belong to
snapshotDiff diff two snapshots of a directory or diff the
current directory contents with a snapshot
lsSnapshottableDir list all snapshottable dirs owned by the current user
Use -help to see options
portmap run a portmap service
nfs3 run an NFS version 3 gateway
cacheadmin configure the HDFS cache
crypto configure HDFS encryption zones
storagepolicies get all the existing block storage policies
version print the version

Most commands print help when invoked w/o parameters.


[kfk@bigdata-pro02 hadoop-2.6.0]$
[kfk@bigdata-pro02 hadoop-2.6.0]$ bin/hdfs haadmin -help
Usage: DFSHAAdmin [-ns <nameserviceId>]
    [-transitionToActive <serviceId> [--forceactive]]
    [-transitionToStandby <serviceId>]
    [-failover [--forcefence] [--forceactive] <serviceId> <serviceId>]
    [-getServiceState <serviceId>]
    [-checkHealth <serviceId>]
    [-help <command>]

Generic options supported are
-conf <configuration file>     specify an application configuration file
-D <property=value>            use value for given property
-fs <local|namenode:port>      specify a namenode
-jt <local|resourcemanager:port>    specify a ResourceManager
-files <comma separated list of files>    specify comma separated files to be copied to the map reduce cluster
-libjars <comma separated list of jars>    specify comma separated jar files to include in the classpath.
-archives <comma separated list of archives>    specify comma separated archives to be unarchived on the compute machines.

The general command line syntax is
bin/hadoop command [genericOptions] [commandOptions]

[kfk@bigdata-pro02 hadoop-2.6.0]$


Note that the built-in haadmin commands shown above already cover both situations: what to run when both NameNodes are standby and what to run when both are active, so I won't belabor it here.

If the problem is still not resolved, force one NameNode to active manually:

bin/hdfs haadmin -transitionToActive nn1
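One caveat: when dfs.ha.automatic-failover.enabled is true, haadmin will normally refuse a manual state transition unless you also pass --forcemanual, so the command becomes:

bin/hdfs haadmin -transitionToActive --forcemanual nn1

Similarly, if both NameNodes ever report active, you can demote one with bin/hdfs haadmin -transitionToStandby --forcemanual nn2 (again assuming the NameNode IDs are nn1 and nn2).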
