
Kafka Design: Replication and Leader Election


Table of Contents

1. Preface

2. Replication

3. Replicated Logs: Quorums, ISRs, and State Machines

4. Unclean Leader Election: What If They All Die?

5. Availability and Durability Guarantees

6. Replica Management


1. Preface

    In Kafka, replicas come in two kinds: leader replicas and follower replicas. When a partition is created, one replica is elected as its leader replica, and the remaining replicas automatically become follower replicas.

    Kafka's replication model is stricter than that of many other distributed systems. In Kafka, follower replicas do not serve client traffic: no follower replica answers producer or consumer read/write requests. Every request must be handled by the leader replica; in other words, all reads and writes go to the broker that hosts the leader replica, and that broker handles them. A follower replica's only job is to asynchronously fetch messages from the leader replica and append them to its own log, so that it stays in sync with the leader.

    When the leader replica dies, that is, when the broker hosting it goes down, Kafka learns of this in real time through the monitoring that ZooKeeper provides and immediately starts a new round of leader election, picking a new leader from among the follower replicas. When the old leader comes back, it can only rejoin the cluster as a follower replica.
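    As a concrete way to see this layout, the following minimal sketch uses the Kafka Java Admin client to print each partition's leader, replica set, and ISR. It is only an illustration: the topic name demo-topic and the broker address are placeholder assumptions, and a recent kafka-clients (3.x) dependency is assumed.

```java
import java.util.List;
import java.util.Properties;
import org.apache.kafka.clients.admin.Admin;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.TopicDescription;
import org.apache.kafka.common.TopicPartitionInfo;

public class ShowPartitionLeaders {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // assumed address

        try (Admin admin = Admin.create(props)) {
            TopicDescription desc = admin.describeTopics(List.of("demo-topic"))
                                         .allTopicNames().get()
                                         .get("demo-topic");
            for (TopicPartitionInfo p : desc.partitions()) {
                // All reads and writes for this partition go to the leader broker;
                // the other replicas only fetch from it to stay in sync.
                System.out.printf("partition %d: leader=%s, replicas=%s, isr=%s%n",
                        p.partition(), p.leader(), p.replicas(), p.isr());
            }
        }
    }
}
```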

2. Replication

Original: Kafka replicates the log for each topic's partitions across a configurable number of servers (you can set this replication factor on a topic-by-topic basis). This allows automatic failover to these replicas when a server in the cluster fails so messages remain available in the presence of failures.

    Kafka replicates the log of each topic partition across servers in the cluster (that is, it copies our messages; the copies are called replicas, and the replication factor can be set per topic). When a server in the cluster fails, Kafka automatically fails over to these replicas, so messages remain available in the presence of failures.
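    A minimal sketch of setting the replication factor per topic with the Java Admin client follows; the topic name, partition count, and broker address are illustrative assumptions, not values from the original text.

```java
import java.util.List;
import java.util.Properties;
import org.apache.kafka.clients.admin.Admin;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.NewTopic;

public class CreateReplicatedTopic {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // assumed address

        try (Admin admin = Admin.create(props)) {
            // 6 partitions, replication factor 3: each partition's log is copied to 3 brokers,
            // so the partition can survive the loss of up to 2 of them.
            NewTopic topic = new NewTopic("demo-topic", 6, (short) 3);
            admin.createTopics(List.of(topic)).all().get();
        }
    }
}
```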

Original: Other messaging systems provide some replication-related features, but, in our (totally biased) opinion, this appears to be a tacked-on thing, not heavily used, and with large downsides: slaves are inactive, throughput is heavily impacted, it requires fiddly manual configuration, etc. Kafka is meant to be used with replication by default—in fact we implement un-replicated topics as replicated topics where the replication factor is one.

    Other messaging systems provide some replication-related features, but in our (admittedly biased) view they feel tacked on, are not heavily used, and come with large downsides: slaves are inactive, throughput suffers badly, configuration is fiddly and manual, and so on. Kafka is meant to be used with replication by default; in fact, an un-replicated topic is simply a replicated topic whose replication factor is one.

Original: The unit of replication is the topic partition. Under non-failure conditions, each partition in Kafka has a single leader and zero or more followers. The total number of replicas including the leader constitute the replication factor. All reads and writes go to the leader of the partition. Typically, there are many more partitions than brokers and the leaders are evenly distributed among brokers. The logs on the followers are identical to the leader's log—all have the same offsets and messages in the same order (though, of course, at any given time the leader may have a few as-yet unreplicated messages at the end of its log).

    The unit of replication is the topic partition. Under non-failure conditions, each Kafka partition has a single leader and zero or more followers. The total number of replicas, including the leader, is the replication factor. All reads and writes go to the partition's leader. Typically there are many more partitions than brokers, and the leaders are evenly distributed among the brokers. The followers' logs are identical to the leader's log: they have the same offsets and the same messages in the same order (although, of course, at any given moment the leader may have a few messages at the end of its log that have not yet been replicated to the followers).

Original: Followers consume messages from the leader just as a normal Kafka consumer would and apply them to their own log. Having the followers pull from the leader has the nice property of allowing the follower to naturally batch together log entries they are applying to their log.

    Followers consume messages from the leader just like an ordinary Kafka consumer and apply them to their own log. Having the followers pull from the leader has the nice property that a follower can naturally batch together the log entries it is applying to its log, which is good for performance.

Original: As with most distributed systems, automatically handling failures requires a precise definition of what it means for a node to be "alive." In Kafka, a special node known as the "controller" is responsible for managing the registration of brokers in the cluster. Broker liveness has two conditions:

  1. Brokers must maintain an active session with the controller in order to receive regular metadata updates.
  2. Brokers acting as followers must replicate the writes from the leader and not fall "too far" behind. 

    As with most distributed systems, automatically handling failures requires a precise definition of what it means for a node to be "alive." In Kafka, a special node known as the "controller" is responsible for managing the registration of brokers in the cluster. A broker is considered alive under two conditions:

  1. The broker must maintain an active session with the controller so that it keeps receiving regular metadata updates.
  2. A broker acting as a follower must replicate the leader's writes and must not fall "too far" behind.

Original: We refer to nodes satisfying these two conditions as being "in sync" to avoid the vagueness of "alive" or "failed". The leader keeps track of the set of "in sync" nodes. If a follower dies, gets stuck, or falls behind, the leader will remove it from the list of in sync replicas. The definition of, how far behind is too far, is controlled by the replica.lag.max.messages configuration and the definition of a stuck replica is controlled by the replica.lag.time.max.ms configuration.

    We say that nodes satisfying these two conditions are "in sync," to avoid the vagueness of "alive" or "failed." The leader keeps track of the set of in-sync nodes. If a follower dies, gets stuck, or falls behind, the leader removes it from the list of in-sync replicas. How far behind is "too far" is controlled by the replica.lag.max.messages configuration, and what counts as a stuck replica is controlled by the replica.lag.time.max.ms configuration.

Original: In distributed systems terminology we only attempt to handle a "fail/recover" model of failures where nodes suddenly cease working and then later recover (perhaps without knowing that they have died). Kafka does not handle so-called "Byzantine" failures in which nodes produce arbitrary or malicious responses (perhaps due to bugs or foul play).

    In distributed-systems terms, we only attempt to handle the "fail/recover" failure model, in which nodes suddenly stop working and later come back (possibly without knowing that they had died). Kafka does not handle so-called Byzantine failures, in which nodes produce arbitrary or malicious responses (perhaps because of bugs or foul play).

Original: We can now more precisely define that a message is considered committed when all in sync replicas for that partition have applied it to their log. Only committed messages are ever given out to the consumer. This means that the consumer need not worry about potentially seeing a message that could be lost if the leader fails. Producers, on the other hand, have the option of either waiting for the message to be committed or not, depending on their preference for tradeoff between latency and durability. This preference is controlled by the acks setting that the producer uses. Note that topics have a setting for the "minimum number" of in-sync replicas that is checked when the producer requests acknowledgment that a message has been written to the full set of in-sync replicas. If a less stringent acknowledgement is requested by the producer, then the message can be committed, and consumed, even if the number of in-sync replicas is lower than the minimum (e.g. it can be as low as just the leader).

    We can now define more precisely that a message is considered "committed" once all in-sync replicas of its partition have applied it to their logs. Only committed messages are ever handed out to consumers, so a consumer never has to worry about seeing a message that could be lost if the leader fails. Producers, on the other hand, can choose whether or not to wait for a message to be committed, depending on how they want to trade latency against durability; this is controlled by the producer's acks setting. Note that topics also have a setting for the minimum number of in-sync replicas, which is checked when the producer asks to be acknowledged only after the message has been written to the full set of in-sync replicas. If the producer requests a less stringent acknowledgement, the message can be committed and consumed even when the number of in-sync replicas is below that minimum (it can be as low as just the leader).
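    The latency/durability choice described above is driven by the producer's acks setting. A minimal sketch with the Java producer follows; the topic name and broker address are assumed placeholders.

```java
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public class AcksExample {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // assumed address
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());

        // acks=0  : do not wait for any acknowledgement (lowest latency, weakest durability)
        // acks=1  : wait only for the leader to write the message to its own log
        // acks=all: wait until all current in-sync replicas have the message ("committed")
        props.put(ProducerConfig.ACKS_CONFIG, "all");

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // Blocking on get() means the call returns only after the broker has
            // acknowledged the write under the acks level chosen above.
            producer.send(new ProducerRecord<>("demo-topic", "key", "value")).get();
        }
    }
}
```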

Original: The guarantee that Kafka offers is that a committed message will not be lost, as long as there is at least one in sync replica alive, at all times.

    The guarantee Kafka offers is that a committed message will not be lost, as long as at least one in-sync replica remains alive at all times.

Original: Kafka will remain available in the presence of node failures after a short fail-over period, but may not remain available in the presence of network partitions.

    Kafka remains available in the presence of node failures after a short fail-over period, but it may not remain available in the presence of network partitions.

3. Replicated Logs: Quorums, ISRs, and State Machines

Quorum: originally, the number of members of a deliberative body (for example, representatives or senators) who must be present for it to transact business and make decisions, generally more than half.

Original: At its heart a Kafka partition is a replicated log. The replicated log is one of the most basic primitives in distributed data systems, and there are many approaches for implementing one. A replicated log can be used by other systems as a primitive for implementing other distributed systems in the state-machine style.

    At its heart, a Kafka partition is a replicated log. The replicated log is one of the most basic primitives in distributed data systems, and there are many ways to implement one. A replicated log can also be used by other systems as a primitive for building distributed systems in the state-machine style.

Original: A replicated log models the process of coming into consensus on the order of a series of values (generally numbering the log entries 0, 1, 2, ...). There are many ways to implement this, but the simplest and fastest is with a leader who chooses the ordering of values provided to it. As long as the leader remains alive, all followers need to only copy the values and ordering, the leader chooses.

    A replicated log models the process of reaching consensus on the order of a series of values (the log entries are generally numbered 0, 1, 2, ...). There are many ways to implement this, but the simplest and fastest is to have a leader that chooses the ordering of the values given to it; as long as the leader stays alive, all the followers only need to copy the values and the ordering that the leader chose.
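    To illustrate the idea that the leader alone chooses the ordering and the followers merely copy it, here is a deliberately toy, single-process sketch. It is not Kafka code; it only mirrors the shape of a leader-driven replicated log, with "committed" meaning every follower has applied the entry.

```java
import java.util.ArrayList;
import java.util.List;

// Toy model of a replicated log: the leader assigns offsets (0, 1, 2, ...)
// and followers copy entries in exactly that order.
class ToyReplicatedLog {
    static class Replica {
        final List<String> log = new ArrayList<>();
    }

    final Replica leader = new Replica();
    final List<Replica> followers = List.of(new Replica(), new Replica());

    // The leader alone decides the position of each new value.
    long append(String value) {
        leader.log.add(value);
        return leader.log.size() - 1; // the offset the leader chose
    }

    // Followers never reorder; they just copy whatever they are missing.
    void replicate() {
        for (Replica f : followers) {
            for (int i = f.log.size(); i < leader.log.size(); i++) {
                f.log.add(leader.log.get(i));
            }
        }
    }

    // An entry counts as "committed" once every follower has applied it.
    long highWatermark() {
        long min = leader.log.size();
        for (Replica f : followers) {
            min = Math.min(min, f.log.size());
        }
        return min;
    }

    public static void main(String[] args) {
        ToyReplicatedLog log = new ToyReplicatedLog();
        log.append("a");
        log.append("b");
        log.replicate();
        System.out.println("committed up to offset " + (log.highWatermark() - 1));
    }
}
```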

Original: Of course if leaders didn't fail we wouldn't need followers! When the leader does die we need to choose a new leader from among the followers. But followers themselves may fall behind or crash so we must ensure we choose an up-to-date follower. The fundamental guarantee a log replication algorithm must provide is that if we tell the client a message is committed, and the leader fails, the new leader we elect must also have that message. This yields a tradeoff: if the leader waits for more followers to acknowledge a message before declaring it committed then there will be more potentially electable leaders.

    Of course, if leaders never failed we would not need followers at all! When the leader does die, we need to choose a new leader from among the followers. But followers themselves may fall behind or crash, so we must make sure we choose an up-to-date follower. The fundamental guarantee a log replication algorithm must provide is that if we tell a client a message is committed and the leader then fails, the new leader we elect must also have that message. This yields a trade-off: the more followers the leader waits on to acknowledge a message before declaring it committed, the more replicas there are that could potentially be elected leader.

Original: If you choose the number of acknowledgements required and the number of logs that must be compared to elect a leader such that there is guaranteed to be an overlap, then this is called a Quorum.

    If you choose the number of acknowledgements required (before a commit) and the number of logs that must be compared (to elect a leader) so that the two sets are guaranteed to overlap, this is called a Quorum.

Original: A common approach to this tradeoff is to use a majority vote for both the commit decision and the leader election. This is not what Kafka does, but let's explore it anyway to understand the tradeoffs. Let's say we have 2f+1 replicas. If f+1 replicas must receive a message prior to a commit being declared by the leader, and if we elect a new leader by electing the follower with the most complete log from at least f+1 replicas, then, with no more than f failures, the leader is guaranteed to have all committed messages. This is because among any f+1 replicas, there must be at least one replica that contains all committed messages. That replica's log will be the most complete and therefore will be selected as the new leader. There are many remaining details that each algorithm must handle (such as precisely defined what makes a log more complete, ensuring log consistency during leader failure or changing the set of servers in the replica set) but we will ignore these for now.

    A common approach to this trade-off is to use a majority vote both for the commit decision and for leader election. This is not what Kafka does, but let's explore it to understand the trade-offs. Suppose we have 2f+1 replicas. If f+1 replicas must receive a message before the leader declares it committed, and if we elect a new leader by choosing the follower with the most complete log from among at least f+1 replicas, then, with no more than f failures, the new leader is guaranteed to have all committed messages. This is because among any f+1 replicas there must be at least one that contains every committed message; its log will be the most complete, so it will be selected as the new leader. Each algorithm must still handle many remaining details (precisely defining what makes one log more complete than another, ensuring log consistency during leader failure, changing the set of servers in the replica set), but we will ignore those for now.

Original: This majority vote approach has a very nice property: the latency is dependent on only the fastest servers. That is, if the replication factor is three, the latency is determined by the faster slave not the slower one.

    This majority-vote approach has a very nice property: the latency depends only on the fastest servers. That is, with a replication factor of three, the commit latency is determined by the faster of the two followers, not the slower one.

Original: There are a rich variety of algorithms in this family including ZooKeeper's Zab, Raft, and Viewstamped Replication. The most similar academic publication we are aware of to Kafka's actual implementation is PacificA from Microsoft.

    There is a rich family of such algorithms, including ZooKeeper's Zab, Raft, and Viewstamped Replication. The academic publication we know of that is closest to Kafka's actual implementation is PacificA from Microsoft.

Original: The downside of majority vote is that it doesn't take many failures to leave you with no electable leaders. To tolerate one failure requires three copies of the data, and to tolerate two failures requires five copies of the data. In our experience having only enough redundancy to tolerate a single failure is not enough for a practical system, but doing every write five times, with 5x the disk space requirements and 1/5th the throughput, is not very practical for large volume data problems. This is likely why quorum algorithms more commonly appear for shared cluster configuration such as ZooKeeper but are less common for primary data storage. For example in HDFS the namenode's high-availability feature is built on a majority-vote-based journal, but this more expensive approach is not used for the data itself.

    The downside of majority vote is that it does not take many failures to leave you with no electable leader. Tolerating one failure requires three copies of the data, and tolerating two failures requires five copies. In our experience, having only enough redundancy to tolerate a single failure is not enough for a practical system; but writing everything five times, with 5x the disk space and 1/5th the throughput, is not practical for high-volume data problems either. This is probably why quorum algorithms appear more often for shared cluster configuration, such as ZooKeeper, and less often for primary data storage. For example, in HDFS the namenode's high-availability feature is built on a majority-vote-based journal, but that more expensive approach is not used for the data itself.

Original: Kafka takes a slightly different approach to choosing its quorum set. Instead of majority vote, Kafka dynamically maintains a set of in-sync replicas (ISR) that are caught-up to the leader. Only members of this set are eligible for election as leader. A write to a Kafka partition is not considered committed until all in-sync replicas have received the write. This ISR set is persisted to ZooKeeper whenever it changes. Because of this, any replica in the ISR is eligible to be elected leader. This is an important factor for Kafka's usage model where there are many partitions and ensuring leadership balance is important. With this ISR model and f+1 replicas, a Kafka topic can tolerate f failures without losing committed messages.

    Kafka takes a slightly different approach to choosing its quorum set. Instead of majority vote, Kafka dynamically maintains a set of in-sync replicas (ISR) that are caught up to the leader, and only members of this set are eligible for election as leader. A write to a Kafka partition is not considered committed until all in-sync replicas have received it. The ISR set is persisted to ZooKeeper whenever it changes, and because of this any replica in the ISR is eligible to be elected leader. This matters for Kafka's usage model, where there are many partitions and keeping leadership balanced is important. With this ISR model and f+1 replicas, a Kafka topic can tolerate f failures without losing committed messages.

Original: For most use cases we hope to handle, we think this tradeoff is a reasonable one. In practice, to tolerate f failures, both the majority vote and the ISR approach will wait for the same number of replicas to acknowledge before committing a message (e.g. to survive one failure a majority quorum needs three replicas and one acknowledgement and the ISR approach requires two replicas and one acknowledgement). The ability to commit without the slowest servers is an advantage of the majority vote approach. However, we think it is ameliorated by allowing the client to choose whether they block on the message commit or not, and the additional throughput and disk space due to the lower required replication factor is worth it.

    For most of the use cases we hope to handle, we think this trade-off is a reasonable one. In practice, to tolerate f failures, both the majority-vote and the ISR approach wait for the same number of replicas to acknowledge before committing a message (for example, to survive one failure a majority quorum needs three replicas and one acknowledgement, while the ISR approach needs two replicas and one acknowledgement). Being able to commit without waiting for the slowest server is an advantage of the majority-vote approach; however, we think this is mitigated by letting the client choose whether or not to block on the commit, and the additional throughput and disk space gained from the lower required replication factor are worth it.
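    The arithmetic behind that comparison is small enough to write out. A back-of-the-envelope sketch, not from the original text: to tolerate f failures, majority vote needs 2f+1 replicas while the ISR approach needs only f+1.

```java
public class QuorumSizing {
    // Majority vote: to tolerate f failures you need 2f + 1 replicas,
    // and the leader must wait for a majority (f + 1) before committing.
    static int majorityVoteReplicas(int f) {
        return 2 * f + 1;
    }

    // ISR approach: f + 1 replicas suffice, because any replica still in the ISR
    // is guaranteed to have every committed message.
    static int isrReplicas(int f) {
        return f + 1;
    }

    public static void main(String[] args) {
        for (int f = 1; f <= 3; f++) {
            System.out.printf("tolerate %d failure(s): majority vote needs %d replicas, ISR needs %d%n",
                    f, majorityVoteReplicas(f), isrReplicas(f));
        }
    }
}
```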

Original: Another important design distinction is that Kafka does not require that crashed nodes recover with all their data intact. It is not uncommon for replication algorithms in this space to depend on the existence of "stable storage" that cannot be lost in any failure-recovery scenario without potential consistency violations. There are two primary problems with this assumption. First, disk errors are the most common problem we observe in real operation of persistent data systems and they often do not leave data intact. Secondly, even if this were not a problem, we do not want to require the use of fsync on every write for our consistency guarantees as this can reduce performance by two to three orders of magnitude. Our protocol for allowing a replica to rejoin the ISR ensures that before rejoining, it must fully re-sync again even if it lost unflushed data in its crash.

    Another important design difference is that Kafka does not require crashed nodes to recover with all of their data intact. Replication algorithms in this space commonly depend on "stable storage" that must survive any failure-recovery scenario, otherwise consistency may be violated. There are two main problems with that assumption. First, disk errors are the most common problem we observe in real operation of persistent data systems, and they often do not leave data intact. Second, even if that were not a problem, we do not want to require fsync on every write for our consistency guarantees, since that can reduce performance by two to three orders of magnitude. Our protocol for letting a replica rejoin the ISR ensures that before rejoining it must fully re-sync again, even if it lost unflushed data in its crash.

4. Unclean Leader Election: What If They All Die?

Original: Note that Kafka's guarantee with respect to data loss is predicated on at least one replica remaining in sync. If all the nodes replicating a partition die, this guarantee no longer holds.

    Note that Kafka's guarantee against data loss is predicated on at least one replica remaining in sync. If all the nodes replicating a partition die, this guarantee no longer holds.

Original: However a practical system needs to do something reasonable when all the replicas die. If you are unlucky enough to have this occur, it is important to consider what will happen. There are two behaviors that could be implemented:

  1. Wait for a replica in the ISR to come back to life and choose this replica as the leader (hopefully it still has all its data).
  2. Choose the first replica (not necessarily in the ISR) that comes back to life as the leader.

    However, a practical system needs to do something reasonable when all the replicas die. If you are unlucky enough for this to happen, it is important to consider what will follow. Two behaviors could be implemented:

  1. Wait for a replica in the ISR to come back to life and choose it as the leader (hoping it still has all of its data).
  2. Choose the first replica that comes back to life (not necessarily one from the ISR) as the leader.

Original: This is a simple tradeoff between availability and consistency. If we wait for replicas in the ISR, then we will remain unavailable as long as those replicas are down. If such replicas were destroyed or their data was lost, then we are permanently down. If, on the other hand, a non-in-sync replica comes back to life and we allow it to become leader, then its log becomes the source of truth even though it is not guaranteed to have every committed message. In our current release we choose the second strategy and favor choosing a potentially inconsistent replica when all replicas in the ISR are dead. This behavior can be disabled using configuration property unclean.leader.election.enable, to support use cases where downtime is preferable to inconsistency.

    This is a simple trade-off between availability and consistency. If we wait for replicas from the ISR, we remain unavailable for as long as those replicas are down; if they were destroyed or their data was lost, we are down permanently. If, on the other hand, a non-in-sync replica comes back to life and we allow it to become leader, its log becomes the source of truth even though it is not guaranteed to contain every committed message. The release described in the passage above chose the second strategy, favoring a potentially inconsistent replica when all replicas in the ISR are dead; that behavior can be disabled with the configuration property unclean.leader.election.enable, for use cases where downtime is preferable to inconsistency. Since Kafka 0.11.0.0 the default has been flipped: unclean leader election is disabled by default, so out of the box Kafka follows the first strategy and waits for a consistent replica.
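    For a per-topic handle on this trade-off, the minimal sketch below uses the Java Admin client's incremental config API to set unclean.leader.election.enable; the topic name and broker address are assumed placeholders, and the same property can also be set in the broker configuration.

```java
import java.util.List;
import java.util.Map;
import java.util.Properties;
import org.apache.kafka.clients.admin.Admin;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.AlterConfigOp;
import org.apache.kafka.clients.admin.ConfigEntry;
import org.apache.kafka.common.config.ConfigResource;

public class DisableUncleanElection {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // assumed address

        try (Admin admin = Admin.create(props)) {
            ConfigResource topic = new ConfigResource(ConfigResource.Type.TOPIC, "demo-topic");
            // false = wait for an ISR replica to come back (consistency over availability);
            // true  = let the first replica that revives become leader (availability over consistency).
            AlterConfigOp op = new AlterConfigOp(
                    new ConfigEntry("unclean.leader.election.enable", "false"),
                    AlterConfigOp.OpType.SET);
            admin.incrementalAlterConfigs(Map.of(topic, List.of(op))).all().get();
        }
    }
}
```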

Original: This dilemma is not specific to Kafka. It exists in any quorum-based scheme. For example in a majority voting scheme, if a majority of servers suffer a permanent failure, then you must either choose to lose 100% of your data or violate consistency by taking what remains on an existing server as your new source of truth.

    This dilemma is not specific to Kafka; it exists in any quorum-based scheme. For example, in a majority-voting scheme, if a majority of the servers suffer a permanent failure, you must either accept losing 100% of your data or violate consistency by taking whatever remains on a surviving server as your new source of truth.

5. Availability and Durability Guarantees

Original: When writing to Kafka, producers can choose whether they wait for the message to be acknowledged by 0,1 or all (-1) replicas. Note that "acknowledgement by all replicas" does not guarantee that the full set of assigned replicas have received the message. By default, when acks=all, acknowledgement happens as soon as all the current in-sync replicas have received the message. For example, if a topic is configured with only two replicas and one fails (i.e., only one in sync replica remains), then writes that specify acks=all will succeed. However, these writes could be lost if the remaining replica also fails. Although this ensures maximum availability of the partition, this behavior may be undesirable to some users who prefer durability over availability. Therefore, we provide two topic-level configurations that can be used to prefer message durability over availability:

    When writing to Kafka, producers can choose whether to wait for the message to be acknowledged by 0, 1, or all (-1) replicas. Note that "acknowledged by all replicas" does not guarantee that the full set of assigned replicas has received the message: by default, with acks=all, acknowledgement happens as soon as all of the current in-sync replicas have received it. For example, if a topic is configured with two replicas and one of them fails (leaving only one in-sync replica), writes with acks=all will still succeed; but those writes can be lost if the remaining replica also fails. Although this maximizes availability of the partition, it may be undesirable for users who prefer durability over availability. Kafka therefore provides two topic-level configurations for preferring message durability over availability:

Original: Disable unclean leader election - if all replicas become unavailable, then the partition will remain unavailable until the most recent leader becomes available again. This effectively prefers unavailability over the risk of message loss. See the previous section on Unclean Leader Election for clarification.

    Disable unclean leader election: if all replicas become unavailable, the partition stays unavailable until the most recent leader becomes available again. This effectively prefers unavailability over the risk of message loss. See the previous section on unclean leader election for details.

Original: Specify a minimum ISR size - the partition will only accept writes if the size of the ISR is above a certain minimum, in order to prevent the loss of messages that were written to just a single replica, which subsequently becomes unavailable. This setting only takes effect if the producer uses required.acks=-1 and guarantees that the message will be acknowledged by at least this many in-sync replicas. This setting offers a trade-off between consistency and availability. A higher setting for minimum ISR size guarantees better consistency since the message is guaranteed to be written to more replicas which reduces the probability that it will be lost. However, it reduces availability since the partition will be unavailable for writes if the number of in-sync replicas drops below the minimum threshold.

    Specify a minimum ISR size: the partition only accepts writes while the ISR is at least this large, to prevent losing messages that were written to just a single replica that subsequently becomes unavailable. This setting takes effect only when the producer uses acks=-1 (acks=all); it then guarantees that the message is acknowledged by at least this many in-sync replicas. The setting is a trade-off between consistency and availability. A higher minimum ISR size gives better consistency, because the message is guaranteed to be written to more replicas, which reduces the probability that it is lost; but it lowers availability, because the partition becomes unwritable whenever the number of in-sync replicas drops below the threshold.
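    Putting both durability-oriented settings together, a minimal sketch follows (the topic name, sizes, and broker address are illustrative assumptions): a topic with replication factor 3, min.insync.replicas=2, unclean leader election disabled, and a producer that writes with acks=all.

```java
import java.util.List;
import java.util.Map;
import java.util.Properties;
import org.apache.kafka.clients.admin.Admin;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.NewTopic;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public class DurableTopicExample {
    public static void main(String[] args) throws Exception {
        String bootstrap = "localhost:9092"; // assumed address

        Properties adminProps = new Properties();
        adminProps.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, bootstrap);
        try (Admin admin = Admin.create(adminProps)) {
            NewTopic topic = new NewTopic("orders", 3, (short) 3)
                    .configs(Map.of(
                            // refuse writes unless at least 2 replicas are in sync
                            "min.insync.replicas", "2",
                            // never elect a replica that may be missing committed messages
                            "unclean.leader.election.enable", "false"));
            admin.createTopics(List.of(topic)).all().get();
        }

        Properties producerProps = new Properties();
        producerProps.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, bootstrap);
        producerProps.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        producerProps.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        // acks=all + min.insync.replicas=2: a send succeeds only after at least
        // 2 in-sync replicas have the message; otherwise it fails rather than risk loss.
        producerProps.put(ProducerConfig.ACKS_CONFIG, "all");

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(producerProps)) {
            producer.send(new ProducerRecord<>("orders", "order-1", "created")).get();
        }
    }
}
```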

6. Replica Management

Original: The above discussion on replicated logs really covers only a single log, i.e. one topic partition. However a Kafka cluster will manage hundreds or thousands of these partitions. We attempt to balance partitions within a cluster in a round-robin fashion to avoid clustering all partitions for high-volume topics on a small number of nodes. Likewise we try to balance leadership so that each node is the leader for a proportional share of its partitions.

    The discussion of replicated logs above really covers only a single log, that is, one topic partition. A Kafka cluster, however, manages hundreds or thousands of such partitions. Kafka tries to balance partitions across the cluster in a round-robin fashion, to avoid piling all the partitions of high-volume topics onto a small number of nodes. Likewise it tries to balance leadership, so that each node is the leader for a proportional share of its partitions.
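    Leadership balance can also be restored explicitly after failovers. A sketch assuming kafka-clients 2.4 or later and placeholder names: trigger a preferred leader election so that each listed partition's leadership moves back to the first replica in its assignment.

```java
import java.util.Properties;
import java.util.Set;
import org.apache.kafka.clients.admin.Admin;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.common.ElectionType;
import org.apache.kafka.common.TopicPartition;

public class RebalanceLeadership {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // assumed address

        try (Admin admin = Admin.create(props)) {
            // Move leadership of these partitions back to their "preferred" replica
            // (the first replica in the assignment), spreading leaders across brokers.
            Set<TopicPartition> partitions = Set.of(
                    new TopicPartition("demo-topic", 0),
                    new TopicPartition("demo-topic", 1));
            admin.electLeaders(ElectionType.PREFERRED, partitions).all().get();
        }
    }
}
```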

Original: It is also important to optimize the leadership election process as that is the critical window of unavailability. A naive implementation of leader election would end up running an election per partition for all partitions a node hosted when that node failed. Instead, we elect one of the brokers as the "controller". This controller detects failures at the broker level and is responsible for changing the leader of all affected partitions in a failed broker. The result is that we are able to batch together many of the required leadership change notifications which makes the election process far cheaper and faster for a large number of partitions. If the controller fails, one of the surviving brokers will become the new controller.

    It is also important to optimize the leader-election process itself, because that is the critical window of unavailability. A naive implementation would, when a node fails, run a separate election for every partition that node hosted. Instead, Kafka elects one of the brokers as the "controller." The controller detects failures at the broker level and is responsible for changing the leader of all of the affected partitions on the failed broker. As a result, many of the required leadership-change notifications can be batched together, which makes the election process far cheaper and faster when there are many partitions. If the controller itself fails, one of the surviving brokers becomes the new controller.
