
Resolving the client-side NoNodeAvailableException ("No node was available to execute the query") caused by a Cassandra protocol issue

Recently I noticed that our Cassandra client program would, after running for a while, start throwing NoNodeAvailableException and could not recover on its own; only restarting the client fixed it.

In some cases it was also accompanied by the warning: This is likely a gossip or snitch issue, this node will be ignored.

  com.datastax.oss.driver.api.core.NoNodeAvailableException: No node was available to execute the query
      at com.datastax.oss.driver.api.core.NoNodeAvailableException.copy(NoNodeAvailableException.java:40)
      at com.datastax.oss.driver.internal.core.util.concurrent.CompletableFutures.getUninterruptibly(CompletableFutures.java:149)
      at com.datastax.oss.driver.internal.core.cql.CqlRequestSyncProcessor.process(CqlRequestSyncProcessor.java:53)
      at com.datastax.oss.driver.internal.core.cql.CqlRequestSyncProcessor.process(CqlRequestSyncProcessor.java:30)
      at com.datastax.oss.driver.internal.core.session.DefaultSession.execute(DefaultSession.java:230)
      at com.datastax.oss.driver.api.core.cql.SyncCqlSession.execute(SyncCqlSession.java:53)
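For reference, the stack trace comes from an ordinary synchronous execute() call. The following is a minimal sketch of that kind of call, which fails with NoNodeAvailableException once the driver's local node list is empty; the contact point matches the service name in our logs, while the class name and datacenter name are illustrative assumptions:

  import com.datastax.oss.driver.api.core.CqlSession;
  import java.net.InetSocketAddress;

  public class QuerySketch {
    public static void main(String[] args) {
      // Minimal sketch: a synchronous query of the kind that throws
      // NoNodeAvailableException when the driver has no live node left.
      try (CqlSession session =
          CqlSession.builder()
              .addContactPoint(
                  new InetSocketAddress("cassandra-contactpoints.default.svc.cluster.local", 9042))
              .withLocalDatacenter("datacenter1") // assumption: adjust to your datacenter name
              .build()) {
        session.execute("SELECT release_version FROM system.local");
      }
    }
  }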

Googling turned up no solution to a similar problem, so I had to work it out myself.

Looking at the log just before the error, I found the following entries:

  2021-01-15 14:46:11.163 WARN 1 --- [ s0-admin-0] c.d.o.d.internal.core.pool.ChannelPool : [s0|cassandra-contactpoints.default.svc.cluster.local/172.16.21.133:9042] Error while opening new channel (ConnectionInitException: [s0|connecting...] Protocol initialization request, step 1 (STARTUP {CQL_VERSION=3.0.0, DRIVER_NAME=DataStax Java driver for Apache Cassandra(R), DRIVER_VERSION=4.6.1, CLIENT_ID=ab768486-6b17-4280-af3d-7ea42acd6913}): failed to send request (io.netty.channel.StacklessClosedChannelException))
  2021-01-15 14:46:13.306 WARN 1 --- [ s0-admin-0] c.d.o.d.internal.core.pool.ChannelPool : [s0|cassandra-contactpoints.default.svc.cluster.local/172.16.21.133:9042] Error while opening new channel (ConnectionInitException: [s0|connecting...] Protocol initialization request, step 1 (STARTUP {CQL_VERSION=3.0.0, DRIVER_NAME=DataStax Java driver for Apache Cassandra(R), DRIVER_VERSION=4.6.1, CLIENT_ID=ab768486-6b17-4280-af3d-7ea42acd6913}): failed to send request (io.netty.channel.StacklessClosedChannelException))
  2021-01-15 14:46:16.708 WARN 1 --- [ s0-admin-0] c.d.o.d.internal.core.pool.ChannelPool : [s0|cassandra-contactpoints.default.svc.cluster.local/172.16.21.133:9042] Error while opening new channel (ConnectionInitException: [s0|connecting...] Protocol initialization request, step 1 (STARTUP {CQL_VERSION=3.0.0, DRIVER_NAME=DataStax Java driver for Apache Cassandra(R), DRIVER_VERSION=4.6.1, CLIENT_ID=ab768486-6b17-4280-af3d-7ea42acd6913}): failed to send request (io.netty.channel.StacklessClosedChannelException))
  2021-01-15 14:46:28.596 WARN 1 --- [ s0-admin-0] c.d.o.d.internal.core.pool.ChannelPool : [s0|cassandra-contactpoints.default.svc.cluster.local/172.16.21.133:9042] Error while opening new channel (ConnectionInitException: [s0|connecting...] Protocol initialization request, step 1 (STARTUP {CQL_VERSION=3.0.0, DRIVER_NAME=DataStax Java driver for Apache Cassandra(R), DRIVER_VERSION=4.6.1, CLIENT_ID=ab768486-6b17-4280-af3d-7ea42acd6913}): failed to send request (java.nio.channels.NotYetConnectedException))

So the node 172.16.21.133 could not be reached. Checking the Cassandra cluster showed that one node had restarted. Our Cassandra runs on a Kubernetes cluster, and a restarted node is assigned a new IP, which explains why it could no longer be reached. But the cluster has three nodes, so why didn't the client connect to the other two? Reading further back in the client log, I found these entries from client startup:

  2021-01-14 23:28:39.184 INFO 1 --- [ main] c.d.o.d.internal.core.ContactPoints : Contact point cassandra-contactpoints.default.svc.cluster.local:9042 resolves to multiple addresses, will use them all ([cassandra-contactpoints.default.svc.cluster.local/172.16.11.40, cassandra-contactpoints.default.svc.cluster.local/172.16.20.55, cassandra-contactpoints.default.svc.cluster.local/172.16.21.133])
  2021-01-14 23:28:39.497 INFO 1 --- [ main] c.d.o.d.i.core.DefaultMavenCoordinates : DataStax Java driver for Apache Cassandra(R) (com.datastax.oss:java-driver-core) version 4.6.1
  2021-01-14 23:28:41.420 INFO 1 --- [ s0-admin-0] c.d.oss.driver.internal.core.time.Clock : Using native clock for microsecond precision
  2021-01-14 23:28:41.909 INFO 1 --- [ s0-io-0] c.d.o.d.i.core.channel.ChannelFactory : [s0] Failed to connect with protocol DSE_V2, retrying with DSE_V1
  2021-01-14 23:28:41.919 INFO 1 --- [ s0-io-1] c.d.o.d.i.core.channel.ChannelFactory : [s0] Failed to connect with protocol DSE_V1, retrying with V4
  2021-01-14 23:28:41.930 INFO 1 --- [ s0-io-0] c.d.o.d.i.core.channel.ChannelFactory : [s0] Failed to connect with protocol V4, retrying with V3
  2021-01-14 23:28:41.959 INFO 1 --- [ s0-io-1] c.d.oss.driver.api.core.uuid.Uuids : PID obtained through native call to getpid(): 1
  2021-01-14 23:28:42.066 WARN 1 --- [ s0-io-1] c.d.o.d.i.c.m.DefaultTopologyMonitor : [s0] Found invalid row in system.peers for peer: /172.16.11.82. This is likely a gossip or snitch issue, this node will be ignored.
  2021-01-14 23:28:42.195 INFO 1 --- [ s0-admin-0] c.d.o.d.i.core.session.DefaultSession : [s0] Negotiated protocol version V3 for the initial contact point, but other nodes only support V4, downgrading
  2021-01-14 23:28:42.230 WARN 1 --- [ s0-io-1] c.d.o.d.i.c.m.SchemaAgreementChecker : [s0] Missing schema_version in system.peers row for cd12a2ee-e22f-4f40-9e3c-7cad37c98131, excluding from schema agreement check
  2021-01-14 23:28:42.598 WARN 1 --- [ s0-admin-1] c.d.o.d.internal.core.pool.ChannelPool : [s0|cassandra-contactpoints.default.svc.cluster.local/172.16.11.40:9042] Fatal error while initializing pool, forcing the node down (UnsupportedProtocolVersionException: [cassandra-contactpoints.default.svc.cluster.local/172.16.11.40:9042] Host does not support protocol version V4)
  2021-01-14 23:28:42.616 WARN 1 --- [ s0-admin-0] c.d.o.d.internal.core.pool.ChannelPool : [s0|cassandra-contactpoints.default.svc.cluster.local/172.16.20.55:9042] Fatal error while initializing pool, forcing the node down (UnsupportedProtocolVersionException: [cassandra-contactpoints.default.svc.cluster.local/172.16.20.55:9042] Host does not support protocol version V4)

At startup the client resolved the IPs of all three nodes, but during protocol negotiation it dropped two of them, leaving only 172.16.21.133. So once 172.16.21.133 restarted, the client lost contact with the cluster entirely.

All nodes in our Cassandra cluster run the same version, so why did both v3 and v4 show up during protocol negotiation?

Googling the client's protocol negotiation process led me to the article DataStax Java Driver - Native protocol. Negotiation roughly works like this:

  1. Connect to the first contact point and negotiate a version; call it version 1.
  2. Query the system.peers table for the highest version supported by all other nodes; call it version 2.
  3. If version 2 is lower than version 1, close the connection and reconnect to the cluster; otherwise, connect to the remaining nodes with version 1.

Multiple protocol versions normally only coexist during a rolling upgrade of the cluster. The logic above guarantees that the client talks to the cluster with the highest protocol version supported by every node, so that during the upgrade the client can still reach all nodes and the upgrade stays smooth. Our client log, however, does not match this logic: it shows that the version negotiated with the initial contact point was v3 and the version derived from system.peers was v4. Since v4 is higher than v3, the documented logic says v3 should be used, yet the log shows the client ended up on v4. Does the driver code diverge from the documentation? I looked at the driver source and found the relevant code in com.datastax.oss.driver.internal.core.session.DefaultSession:

  private void afterInitialNodeListRefresh(CqlIdentifier keyspace) {
    try {
      boolean protocolWasForced =
          context.getConfig().getDefaultProfile().isDefined(DefaultDriverOption.PROTOCOL_VERSION);
      if (!protocolWasForced) {
        ProtocolVersion currentVersion = context.getProtocolVersion();
        ProtocolVersion bestVersion =
            context
                .getProtocolVersionRegistry()
                .highestCommon(metadataManager.getMetadata().getNodes().values());
        if (!currentVersion.equals(bestVersion)) {
          LOG.info(
              "[{}] Negotiated protocol version {} for the initial contact point, "
                  + "but other nodes only support {}, downgrading",
              logPrefix,
              currentVersion,
              bestVersion);
          context.getChannelFactory().setProtocolVersion(bestVersion);
          // Note that, with the default topology monitor, the control connection is already
          // connected with currentVersion at this point. This doesn't really matter because none
          // of the control queries use any protocol-dependent feature.
          // Keep going as-is, the control connection might switch to the "correct" version later
          // if it reconnects to another node.
        }
      }
      metadataManager
          .refreshSchema(null, false, true)
          .whenComplete(
              (metadata, error) -> {
                if (error != null) {
                  Loggers.warnWithException(
                      LOG,
                      "[{}] Unexpected error while refreshing schema during initialization, "
                          + "keeping previous version",
                      logPrefix,
                      error);
                }
                afterInitialSchemaRefresh(keyspace);
              });
    } catch (Throwable throwable) {
      initFuture.completeExceptionally(throwable);
    }
  }

In this code, currentVersion is version 1 and bestVersion is version 2. The code never compares which of the two is higher; it only checks whether they are equal, and whenever they differ it switches to bestVersion for all nodes. That does not match the documented logic.
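For contrast, here is a minimal sketch (not actual driver code) of what the documented rule would look like, downgrading only when the cluster-wide common version is lower than the one already negotiated; ordering versions by ProtocolVersion.getCode() is my assumption about how such a comparison could be written:

  // Sketch only: downgrade only when the common version is LOWER than the
  // version negotiated with the initial contact point.
  ProtocolVersion currentVersion = context.getProtocolVersion();
  ProtocolVersion bestVersion =
      context
          .getProtocolVersionRegistry()
          .highestCommon(metadataManager.getMetadata().getNodes().values());
  if (bestVersion.getCode() < currentVersion.getCode()) {
    context.getChannelFactory().setProtocolVersion(bestVersion);
  }

The actual code instead takes that branch whenever the two versions merely differ, which is why our client "downgraded" from v3 to v4.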

To sum up, the client actually connects to the cluster with the highest protocol version supported by the other n-1 nodes (n being the number of nodes in the cluster). During a rolling upgrade, if most nodes have already been upgraded to the newer version and a client connects at that point, it will pick the newer protocol version and therefore be unable to connect to the nodes that have not been upgraded yet. That skews the Cassandra load and may affect the stability of both Cassandra and every client, so I would call it a driver bug.
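Incidentally, the protocolWasForced check at the top of afterInitialNodeListRefresh means this negotiation (and the switch described above) is skipped entirely when the protocol version is pinned in the driver configuration. Below is a sketch of pinning it programmatically, as one possible mitigation; it is not the fix we used, and it only helps if every node really supports the pinned version:

  import com.datastax.oss.driver.api.core.CqlSession;
  import com.datastax.oss.driver.api.core.config.DefaultDriverOption;
  import com.datastax.oss.driver.api.core.config.DriverConfigLoader;

  // Sketch: force protocol V4 so that protocolWasForced is true and the driver
  // never renegotiates the version after the initial node list refresh.
  DriverConfigLoader loader =
      DriverConfigLoader.programmaticBuilder()
          .withString(DefaultDriverOption.PROTOCOL_VERSION, "V4")
          .build();
  CqlSession session = CqlSession.builder().withConfigLoader(loader).build();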

Back to our actual problem: all of our Cassandra nodes run 3.x and should all speak protocol v4, so where did v3 come from during negotiation?

The log also contained this line: "Found invalid row in system.peers for peer: /172.16.11.82. This is likely a gossip or snitch issue, this node will be ignored." Googling it led to the article Why am I getting "Found invalid row in system.peers"?, which explains that when a node's IP changes, system.peers may still hold a row for the old IP, and that stale row has to be deleted manually.

So I connected to every node with cqlsh, which reports the negotiated protocol version on connect. Two nodes showed v3 and one showed v4. On both v3 nodes, system.peers contained a row for an IP that no longer exists, with every column other than the IP empty; the v4 node had no such row. After deleting the stale rows and restarting those nodes, cqlsh connected to all of them with v4. So system.peers apparently affects not only the protocol version the client picks but also the protocol version a server node offers; I could not find any documentation to confirm this, so for now I will just treat it as observed behavior. As for when stale IP rows appear in system.peers, the article says a restart may cause it, not that it always does. After I deleted the rows and restarted, the problem did not come back; I will keep watching, and if it happens again I do not know whether the nodes' protocol version will drop back to v3. Leaving it at that for now.
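For the record, the cleanup on each affected node amounted to something like DELETE FROM system.peers WHERE peer = '172.16.11.82'; in cqlsh (using the stale IP from the warning above as an example), followed by a restart of that node.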

After fixing the Cassandra cluster, all client programs need to be restarted so that they refresh their cached node lists. After the restart, check the client logs to confirm the protocol is v4 and that no node has been excluded.

Summary

Root cause:

  1. A system.peers problem caused two nodes' protocol version to be downgraded to v3.
  2. During negotiation the client chose v4 to connect, so those two nodes were dropped from its locally cached node list.
  3. When the one remaining node restarted and changed IP, the client lost every connection to the cluster and started throwing NoNodeAvailableException.

Fix:

  1. Delete the stale IP rows from system.peers and restart the affected nodes.
  2. Restart all client programs.

