clickhouse-operator creates, configures, and manages ClickHouse clusters running on Kubernetes.
The setup breaks down into three main steps: install clickhouse-operator, deploy ZooKeeper on Kubernetes, and configure the ClickHouse cluster.
First create a namespace. Namespaces keep operations isolated: if user A and user B each work in their own namespace, their services will not conflict even when they share the same name.
kubectl create namespace ckk8s
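A quick sanity check that the namespace exists before installing anything into it:

- kubectl get namespace ckk8s
-
- # Should show the namespace with STATUS Active, e.g.:
- # NAME    STATUS   AGE
- # ckk8s   Active   10s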
1) Create the clickhouse-operator-install.yaml file
vim clickhouse-operator-install.yaml
The content is as follows:
- apiVersion: apiextensions.k8s.io/v1beta1
- kind: CustomResourceDefinition
- metadata:
- name: clickhouseinstallations.clickhouse.altinity.com
- spec:
- group: clickhouse.altinity.com
- version: v1
- scope: Namespaced
- names:
- kind: ClickHouseInstallation
- singular: clickhouseinstallation
- plural: clickhouseinstallations
- shortNames:
- - chi
- additionalPrinterColumns:
- - name: version
- type: string
- description: Operator version
- priority: 1 # show in wide view
- JSONPath: .status.version
- - name: clusters
- type: integer
- description: Clusters count
- priority: 0 # show in standard view
- JSONPath: .status.clusters
- - name: shards
- type: integer
- description: Shards count
- priority: 1 # show in wide view
- JSONPath: .status.shards
- - name: hosts
- type: integer
- description: Hosts count
- priority: 0 # show in standard view
- JSONPath: .status.hosts
- - name: taskID
- type: string
- description: TaskID
- priority: 1 # show in wide view
- JSONPath: .status.taskID
- - name: status
- type: string
- description: CHI status
- priority: 0 # show in standard view
- JSONPath: .status.status
- - name: updated
- type: integer
- description: Updated hosts count
- priority: 1 # show in wide view
- JSONPath: .status.updated
- - name: added
- type: integer
- description: Added hosts count
- priority: 1 # show in wide view
- JSONPath: .status.added
- - name: deleted
- type: integer
- description: Hosts deleted count
- priority: 1 # show in wide view
- JSONPath: .status.deleted
- - name: delete
- type: integer
- description: Hosts to be deleted count
- priority: 1 # show in wide view
- JSONPath: .status.delete
- - name: endpoint
- type: string
- description: Client access endpoint
- priority: 1 # show in wide view
- JSONPath: .status.endpoint
- # TODO return to this feature later
- # Pruning unknown fields. FEATURE STATE: Kubernetes v1.15
- # Probably full specification may be needed
- # preserveUnknownFields: false
- validation:
- openAPIV3Schema:
- type: object
- properties:
- spec:
- type: object
- x-kubernetes-preserve-unknown-fields: true
- properties:
- taskID:
- type: string
- # Need to be StringBool
- stop:
- type: string
- enum:
- # List StringBoolXXX constants from model
- - ""
- - "0"
- - "1"
- - "False"
- - "false"
- - "True"
- - "true"
- - "No"
- - "no"
- - "Yes"
- - "yes"
- - "Off"
- - "off"
- - "On"
- - "on"
- - "Disable"
- - "disable"
- - "Enable"
- - "enable"
- - "Disabled"
- - "disabled"
- - "Enabled"
- - "enabled"
- # Need to be StringBool
- troubleshoot:
- type: string
- enum:
- # List StringBoolXXX constants from model
- - ""
- - "0"
- - "1"
- - "False"
- - "false"
- - "True"
- - "true"
- - "No"
- - "no"
- - "Yes"
- - "yes"
- - "Off"
- - "off"
- - "On"
- - "on"
- - "Disable"
- - "disable"
- - "Enable"
- - "enable"
- - "Disabled"
- - "disabled"
- - "Enabled"
- - "enabled"
- namespaceDomainPattern:
- type: string
- templating:
- type: object
- nullable: true
- properties:
- policy:
- type: string
- reconciling:
- type: object
- nullable: true
- properties:
- policy:
- type: string
- configMapPropagationTimeout:
- type: integer
- minimum: 0
- maximum: 3600
- cleanup:
- type: object
- nullable: true
- properties:
- unknownObjects:
- type: object
- nullable: true
- properties:
- statefulSet:
- type: string
- enum:
- # List ObjectsCleanupXXX constants from model
- - "Retain"
- - "Delete"
- pvc:
- type: string
- enum:
- # List ObjectsCleanupXXX constants from model
- - "Retain"
- - "Delete"
- configMap:
- type: string
- enum:
- # List ObjectsCleanupXXX constants from model
- - "Retain"
- - "Delete"
- service:
- type: string
- enum:
- # List ObjectsCleanupXXX constants from model
- - "Retain"
- - "Delete"
- reconcileFailedObjects:
- type: object
- nullable: true
- properties:
- statefulSet:
- type: string
- enum:
- # List ObjectsCleanupXXX constants from model
- - "Retain"
- - "Delete"
- pvc:
- type: string
- enum:
- # List ObjectsCleanupXXX constants from model
- - "Retain"
- - "Delete"
- configMap:
- type: string
- enum:
- # List ObjectsCleanupXXX constants from model
- - "Retain"
- - "Delete"
- service:
- type: string
- enum:
- # List ObjectsCleanupXXX constants from model
- - "Retain"
- - "Delete"
- defaults:
- type: object
- nullable: true
- properties:
- # Need to be StringBool
- replicasUseFQDN:
- type: string
- enum:
- # List StringBoolXXX constants from model
- - ""
- - "0"
- - "1"
- - "False"
- - "false"
- - "True"
- - "true"
- - "No"
- - "no"
- - "Yes"
- - "yes"
- - "Off"
- - "off"
- - "On"
- - "on"
- - "Disable"
- - "disable"
- - "Enable"
- - "enable"
- - "Disabled"
- - "disabled"
- - "Enabled"
- - "enabled"
- distributedDDL:
- type: object
- nullable: true
- properties:
- profile:
- type: string
- templates:
- type: object
- nullable: true
- properties:
- hostTemplate:
- type: string
- podTemplate:
- type: string
- dataVolumeClaimTemplate:
- type: string
- logVolumeClaimTemplate:
- type: string
- serviceTemplate:
- type: string
- clusterServiceTemplate:
- type: string
- shardServiceTemplate:
- type: string
- replicaServiceTemplate:
- type: string
- configuration:
- type: object
- nullable: true
- properties:
- zookeeper:
- type: object
- nullable: true
- properties:
- nodes:
- type: array
- nullable: true
- items:
- type: object
- #required:
- # - host
- properties:
- host:
- type: string
- port:
- type: integer
- minimum: 0
- maximum: 65535
- session_timeout_ms:
- type: integer
- operation_timeout_ms:
- type: integer
- root:
- type: string
- identity:
- type: string
- users:
- type: object
- nullable: true
- profiles:
- type: object
- nullable: true
- quotas:
- type: object
- nullable: true
- settings:
- type: object
- nullable: true
- files:
- type: object
- nullable: true
- clusters:
- type: array
- nullable: true
- items:
- type: object
- #required:
- # - name
- properties:
- name:
- type: string
- minLength: 1
- # See namePartClusterMaxLen const
- maxLength: 15
- pattern: "^[a-zA-Z0-9-]{0,15}$"
- zookeeper:
- type: object
- nullable: true
- properties:
- nodes:
- type: array
- nullable: true
- items:
- type: object
- #required:
- # - host
- properties:
- host:
- type: string
- port:
- type: integer
- minimum: 0
- maximum: 65535
- session_timeout_ms:
- type: integer
- operation_timeout_ms:
- type: integer
- root:
- type: string
- identity:
- type: string
- settings:
- type: object
- nullable: true
- files:
- type: object
- nullable: true
- templates:
- type: object
- nullable: true
- properties:
- hostTemplate:
- type: string
- podTemplate:
- type: string
- dataVolumeClaimTemplate:
- type: string
- logVolumeClaimTemplate:
- type: string
- serviceTemplate:
- type: string
- clusterServiceTemplate:
- type: string
- shardServiceTemplate:
- type: string
- replicaServiceTemplate:
- type: string
- layout:
- type: object
- nullable: true
- properties:
- # DEPRECATED - to be removed soon
- type:
- type: string
- shardsCount:
- type: integer
- replicasCount:
- type: integer
- shards:
- type: array
- nullable: true
- items:
- type: object
- properties:
- name:
- type: string
- minLength: 1
- # See namePartShardMaxLen const
- maxLength: 15
- pattern: "^[a-zA-Z0-9-]{0,15}$"
- # DEPRECATED - to be removed soon
- definitionType:
- type: string
- weight:
- type: integer
- # Need to be StringBool
- internalReplication:
- type: string
- enum:
- # List StringBoolXXX constants from model
- - ""
- - "0"
- - "1"
- - "False"
- - "false"
- - "True"
- - "true"
- - "No"
- - "no"
- - "Yes"
- - "yes"
- - "Off"
- - "off"
- - "On"
- - "on"
- - "Disable"
- - "disable"
- - "Enable"
- - "enable"
- - "Disabled"
- - "disabled"
- - "Enabled"
- - "enabled"
- settings:
- type: object
- nullable: true
- files:
- type: object
- nullable: true
- templates:
- type: object
- nullable: true
- properties:
- hostTemplate:
- type: string
- podTemplate:
- type: string
- dataVolumeClaimTemplate:
- type: string
- logVolumeClaimTemplate:
- type: string
- serviceTemplate:
- type: string
- clusterServiceTemplate:
- type: string
- shardServiceTemplate:
- type: string
- replicaServiceTemplate:
- type: string
- replicasCount:
- type: integer
- minimum: 1
- replicas:
- type: array
- nullable: true
- items:
- # Host
- type: object
- properties:
- name:
- type: string
- minLength: 1
- # See namePartReplicaMaxLen const
- maxLength: 15
- pattern: "^[a-zA-Z0-9-]{0,15}$"
- tcpPort:
- type: integer
- minimum: 1
- maximum: 65535
- httpPort:
- type: integer
- minimum: 1
- maximum: 65535
- interserverHttpPort:
- type: integer
- minimum: 1
- maximum: 65535
- settings:
- type: object
- nullable: true
- files:
- type: object
- nullable: true
- templates:
- type: object
- nullable: true
- properties:
- hostTemplate:
- type: string
- podTemplate:
- type: string
- dataVolumeClaimTemplate:
- type: string
- logVolumeClaimTemplate:
- type: string
- serviceTemplate:
- type: string
- clusterServiceTemplate:
- type: string
- shardServiceTemplate:
- type: string
- replicaServiceTemplate:
- type: string
- replicas:
- type: array
- nullable: true
- items:
- type: object
- properties:
- name:
- type: string
- minLength: 1
- # See namePartShardMaxLen const
- maxLength: 15
- pattern: "^[a-zA-Z0-9-]{0,15}$"
- settings:
- type: object
- nullable: true
- files:
- type: object
- nullable: true
- templates:
- type: object
- nullable: true
- properties:
- hostTemplate:
- type: string
- podTemplate:
- type: string
- dataVolumeClaimTemplate:
- type: string
- logVolumeClaimTemplate:
- type: string
- serviceTemplate:
- type: string
- clusterServiceTemplate:
- type: string
- shardServiceTemplate:
- type: string
- replicaServiceTemplate:
- type: string
- shardsCount:
- type: integer
- minimum: 1
- shards:
- type: array
- nullable: true
- items:
- # Host
- type: object
- properties:
- name:
- type: string
- minLength: 1
- # See namePartReplicaMaxLen const
- maxLength: 15
- pattern: "^[a-zA-Z0-9-]{0,15}$"
- tcpPort:
- type: integer
- minimum: 1
- maximum: 65535
- httpPort:
- type: integer
- minimum: 1
- maximum: 65535
- interserverHttpPort:
- type: integer
- minimum: 1
- maximum: 65535
- settings:
- type: object
- nullable: true
- files:
- type: object
- nullable: true
- templates:
- type: object
- nullable: true
- properties:
- hostTemplate:
- type: string
- podTemplate:
- type: string
- dataVolumeClaimTemplate:
- type: string
- logVolumeClaimTemplate:
- type: string
- serviceTemplate:
- type: string
- clusterServiceTemplate:
- type: string
- shardServiceTemplate:
- type: string
- replicaServiceTemplate:
- type: string
- templates:
- type: object
- nullable: true
- properties:
- hostTemplates:
- type: array
- nullable: true
- items:
- type: object
- #required:
- # - name
- properties:
- name:
- type: string
- portDistribution:
- type: array
- nullable: true
- items:
- type: object
- #required:
- # - type
- properties:
- type:
- type: string
- enum:
- # List PortDistributionXXX constants
- - ""
- - "Unspecified"
- - "ClusterScopeIndex"
- spec:
- # Host
- type: object
- properties:
- name:
- type: string
- minLength: 1
- # See namePartReplicaMaxLen const
- maxLength: 15
- pattern: "^[a-zA-Z0-9-]{0,15}$"
- tcpPort:
- type: integer
- minimum: 1
- maximum: 65535
- httpPort:
- type: integer
- minimum: 1
- maximum: 65535
- interserverHttpPort:
- type: integer
- minimum: 1
- maximum: 65535
- settings:
- type: object
- nullable: true
- files:
- type: object
- nullable: true
- templates:
- type: object
- nullable: true
- properties:
- hostTemplate:
- type: string
- podTemplate:
- type: string
- dataVolumeClaimTemplate:
- type: string
- logVolumeClaimTemplate:
- type: string
- serviceTemplate:
- type: string
- clusterServiceTemplate:
- type: string
- shardServiceTemplate:
- type: string
- replicaServiceTemplate:
- type: string
-
- podTemplates:
- type: array
- nullable: true
- items:
- type: object
- #required:
- # - name
- properties:
- name:
- type: string
- generateName:
- type: string
- zone:
- type: object
- #required:
- # - values
- properties:
- key:
- type: string
- values:
- type: array
- nullable: true
- items:
- type: string
- distribution:
- # DEPRECATED
- type: string
- enum:
- - ""
- - "Unspecified"
- - "OnePerHost"
- podDistribution:
- type: array
- nullable: true
- items:
- type: object
- #required:
- # - type
- properties:
- type:
- type: string
- enum:
- # List PodDistributionXXX constants
- - ""
- - "Unspecified"
- - "ClickHouseAntiAffinity"
- - "ShardAntiAffinity"
- - "ReplicaAntiAffinity"
- - "AnotherNamespaceAntiAffinity"
- - "AnotherClickHouseInstallationAntiAffinity"
- - "AnotherClusterAntiAffinity"
- - "MaxNumberPerNode"
- - "NamespaceAffinity"
- - "ClickHouseInstallationAffinity"
- - "ClusterAffinity"
- - "ShardAffinity"
- - "ReplicaAffinity"
- - "PreviousTailAffinity"
- - "CircularReplication"
- scope:
- type: string
- enum:
- # list PodDistributionScopeXXX constants
- - ""
- - "Unspecified"
- - "Shard"
- - "Replica"
- - "Cluster"
- - "ClickHouseInstallation"
- - "Namespace"
- number:
- type: integer
- minimum: 0
- maximum: 65535
- spec:
- # TODO specify PodSpec
- type: object
- nullable: true
- volumeClaimTemplates:
- type: array
- nullable: true
- items:
- type: object
- #required:
- # - name
- # - spec
- properties:
- name:
- type: string
- reclaimPolicy:
- type: string
- enum:
- - ""
- - "Retain"
- - "Delete"
- metadata:
- type: object
- nullable: true
- spec:
- # TODO specify PersistentVolumeClaimSpec
- type: object
- nullable: true
- serviceTemplates:
- type: array
- nullable: true
- items:
- type: object
- #required:
- # - name
- # - spec
- properties:
- name:
- type: string
- generateName:
- type: string
- metadata:
- # TODO specify ObjectMeta
- type: object
- nullable: true
- spec:
- # TODO specify ServiceSpec
- type: object
- nullable: true
- useTemplates:
- type: array
- nullable: true
- items:
- type: object
- #required:
- # - name
- properties:
- name:
- type: string
- namespace:
- type: string
- useType:
- type: string
- enum:
- # List useTypeXXX constants from model
- - ""
- - "merge"
- ---
- apiVersion: apiextensions.k8s.io/v1beta1
- kind: CustomResourceDefinition
- metadata:
- name: clickhouseinstallationtemplates.clickhouse.altinity.com
- spec:
- group: clickhouse.altinity.com
- version: v1
- scope: Namespaced
- names:
- kind: ClickHouseInstallationTemplate
- singular: clickhouseinstallationtemplate
- plural: clickhouseinstallationtemplates
- shortNames:
- - chit
- additionalPrinterColumns:
- - name: version
- type: string
- description: Operator version
- priority: 1 # show in wide view
- JSONPath: .status.version
- - name: clusters
- type: integer
- description: Clusters count
- priority: 0 # show in standard view
- JSONPath: .status.clusters
- - name: shards
- type: integer
- description: Shards count
- priority: 1 # show in wide view
- JSONPath: .status.shards
- - name: hosts
- type: integer
- description: Hosts count
- priority: 0 # show in standard view
- JSONPath: .status.hosts
- - name: taskID
- type: string
- description: TaskID
- priority: 1 # show in wide view
- JSONPath: .status.taskID
- - name: status
- type: string
- description: CHI status
- priority: 0 # show in standard view
- JSONPath: .status.status
- - name: updated
- type: integer
- description: Updated hosts count
- priority: 1 # show in wide view
- JSONPath: .status.updated
- - name: added
- type: integer
- description: Added hosts count
- priority: 1 # show in wide view
- JSONPath: .status.added
- - name: deleted
- type: integer
- description: Hosts deleted count
- priority: 1 # show in wide view
- JSONPath: .status.deleted
- - name: delete
- type: integer
- description: Hosts to be deleted count
- priority: 1 # show in wide view
- JSONPath: .status.delete
- - name: endpoint
- type: string
- description: Client access endpoint
- priority: 1 # show in wide view
- JSONPath: .status.endpoint
- # TODO return to this feature later
- # Pruning unknown fields. FEATURE STATE: Kubernetes v1.15
- # Probably full specification may be needed
- # preserveUnknownFields: false
- validation:
- openAPIV3Schema:
- type: object
- properties:
- spec:
- type: object
- x-kubernetes-preserve-unknown-fields: true
- properties:
- taskID:
- type: string
- # Need to be StringBool
- stop:
- type: string
- enum:
- # List StringBoolXXX constants from model
- - ""
- - "0"
- - "1"
- - "False"
- - "false"
- - "True"
- - "true"
- - "No"
- - "no"
- - "Yes"
- - "yes"
- - "Off"
- - "off"
- - "On"
- - "on"
- - "Disable"
- - "disable"
- - "Enable"
- - "enable"
- - "Disabled"
- - "disabled"
- - "Enabled"
- - "enabled"
- # Need to be StringBool
- troubleshoot:
- type: string
- enum:
- # List StringBoolXXX constants from model
- - ""
- - "0"
- - "1"
- - "False"
- - "false"
- - "True"
- - "true"
- - "No"
- - "no"
- - "Yes"
- - "yes"
- - "Off"
- - "off"
- - "On"
- - "on"
- - "Disable"
- - "disable"
- - "Enable"
- - "enable"
- - "Disabled"
- - "disabled"
- - "Enabled"
- - "enabled"
- namespaceDomainPattern:
- type: string
- templating:
- type: object
- nullable: true
- properties:
- policy:
- type: string
- reconciling:
- type: object
- nullable: true
- properties:
- policy:
- type: string
- configMapPropagationTimeout:
- type: integer
- minimum: 0
- maximum: 3600
- cleanup:
- type: object
- nullable: true
- properties:
- unknownObjects:
- type: object
- nullable: true
- properties:
- statefulSet:
- type: string
- enum:
- # List ObjectsCleanupXXX constants from model
- - "Retain"
- - "Delete"
- pvc:
- type: string
- enum:
- # List ObjectsCleanupXXX constants from model
- - "Retain"
- - "Delete"
- configMap:
- type: string
- enum:
- # List ObjectsCleanupXXX constants from model
- - "Retain"
- - "Delete"
- service:
- type: string
- enum:
- # List ObjectsCleanupXXX constants from model
- - "Retain"
- - "Delete"
- reconcileFailedObjects:
- type: object
- nullable: true
- properties:
- statefulSet:
- type: string
- enum:
- # List ObjectsCleanupXXX constants from model
- - "Retain"
- - "Delete"
- pvc:
- type: string
- enum:
- # List ObjectsCleanupXXX constants from model
- - "Retain"
- - "Delete"
- configMap:
- type: string
- enum:
- # List ObjectsCleanupXXX constants from model
- - "Retain"
- - "Delete"
- service:
- type: string
- enum:
- # List ObjectsCleanupXXX constants from model
- - "Retain"
- - "Delete"
- defaults:
- type: object
- nullable: true
- properties:
- # Need to be StringBool
- replicasUseFQDN:
- type: string
- enum:
- # List StringBoolXXX constants from model
- - ""
- - "0"
- - "1"
- - "False"
- - "false"
- - "True"
- - "true"
- - "No"
- - "no"
- - "Yes"
- - "yes"
- - "Off"
- - "off"
- - "On"
- - "on"
- - "Disable"
- - "disable"
- - "Enable"
- - "enable"
- - "Disabled"
- - "disabled"
- - "Enabled"
- - "enabled"
- distributedDDL:
- type: object
- nullable: true
- properties:
- profile:
- type: string
- templates:
- type: object
- nullable: true
- properties:
- hostTemplate:
- type: string
- podTemplate:
- type: string
- dataVolumeClaimTemplate:
- type: string
- logVolumeClaimTemplate:
- type: string
- serviceTemplate:
- type: string
- clusterServiceTemplate:
- type: string
- shardServiceTemplate:
- type: string
- replicaServiceTemplate:
- type: string
- configuration:
- type: object
- nullable: true
- properties:
- zookeeper:
- type: object
- nullable: true
- properties:
- nodes:
- type: array
- nullable: true
- items:
- type: object
- #required:
- # - host
- properties:
- host:
- type: string
- port:
- type: integer
- minimum: 0
- maximum: 65535
- session_timeout_ms:
- type: integer
- operation_timeout_ms:
- type: integer
- root:
- type: string
- identity:
- type: string
- users:
- type: object
- nullable: true
- profiles:
- type: object
- nullable: true
- quotas:
- type: object
- nullable: true
- settings:
- type: object
- nullable: true
- files:
- type: object
- nullable: true
- clusters:
- type: array
- nullable: true
- items:
- type: object
- #required:
- # - name
- properties:
- name:
- type: string
- minLength: 1
- # See namePartClusterMaxLen const
- maxLength: 15
- pattern: "^[a-zA-Z0-9-]{0,15}$"
- zookeeper:
- type: object
- nullable: true
- properties:
- nodes:
- type: array
- nullable: true
- items:
- type: object
- #required:
- # - host
- properties:
- host:
- type: string
- port:
- type: integer
- minimum: 0
- maximum: 65535
- session_timeout_ms:
- type: integer
- operation_timeout_ms:
- type: integer
- root:
- type: string
- identity:
- type: string
- settings:
- type: object
- nullable: true
- files:
- type: object
- nullable: true
- templates:
- type: object
- nullable: true
- properties:
- hostTemplate:
- type: string
- podTemplate:
- type: string
- dataVolumeClaimTemplate:
- type: string
- logVolumeClaimTemplate:
- type: string
- serviceTemplate:
- type: string
- clusterServiceTemplate:
- type: string
- shardServiceTemplate:
- type: string
- replicaServiceTemplate:
- type: string
- layout:
- type: object
- nullable: true
- properties:
- # DEPRECATED - to be removed soon
- type:
- type: string
- shardsCount:
- type: integer
- replicasCount:
- type: integer
- shards:
- type: array
- nullable: true
- items:
- type: object
- properties:
- name:
- type: string
- minLength: 1
- # See namePartShardMaxLen const
- maxLength: 15
- pattern: "^[a-zA-Z0-9-]{0,15}$"
- # DEPRECATED - to be removed soon
- definitionType:
- type: string
- weight:
- type: integer
- # Need to be StringBool
- internalReplication:
- type: string
- enum:
- # List StringBoolXXX constants from model
- - ""
- - "0"
- - "1"
- - "False"
- - "false"
- - "True"
- - "true"
- - "No"
- - "no"
- - "Yes"
- - "yes"
- - "Off"
- - "off"
- - "On"
- - "on"
- - "Disable"
- - "disable"
- - "Enable"
- - "enable"
- - "Disabled"
- - "disabled"
- - "Enabled"
- - "enabled"
- settings:
- type: object
- nullable: true
- files:
- type: object
- nullable: true
- templates:
- type: object
- nullable: true
- properties:
- hostTemplate:
- type: string
- podTemplate:
- type: string
- dataVolumeClaimTemplate:
- type: string
- logVolumeClaimTemplate:
- type: string
- serviceTemplate:
- type: string
- clusterServiceTemplate:
- type: string
- shardServiceTemplate:
- type: string
- replicaServiceTemplate:
- type: string
- replicasCount:
- type: integer
- minimum: 1
- replicas:
- type: array
- nullable: true
- items:
- # Host
- type: object
- properties:
- name:
- type: string
- minLength: 1
- # See namePartReplicaMaxLen const
- maxLength: 15
- pattern: "^[a-zA-Z0-9-]{0,15}$"
- tcpPort:
- type: integer
- minimum: 1
- maximum: 65535
- httpPort:
- type: integer
- minimum: 1
- maximum: 65535
- interserverHttpPort:
- type: integer
- minimum: 1
- maximum: 65535
- settings:
- type: object
- nullable: true
- files:
- type: object
- nullable: true
- templates:
- type: object
- nullable: true
- properties:
- hostTemplate:
- type: string
- podTemplate:
- type: string
- dataVolumeClaimTemplate:
- type: string
- logVolumeClaimTemplate:
- type: string
- serviceTemplate:
- type: string
- clusterServiceTemplate:
- type: string
- shardServiceTemplate:
- type: string
- replicaServiceTemplate:
- type: string
- replicas:
- type: array
- nullable: true
- items:
- type: object
- properties:
- name:
- type: string
- minLength: 1
- # See namePartShardMaxLen const
- maxLength: 15
- pattern: "^[a-zA-Z0-9-]{0,15}$"
- settings:
- type: object
- nullable: true
- files:
- type: object
- nullable: true
- templates:
- type: object
- nullable: true
- properties:
- hostTemplate:
- type: string
- podTemplate:
- type: string
- dataVolumeClaimTemplate:
- type: string
- logVolumeClaimTemplate:
- type: string
- serviceTemplate:
- type: string
- clusterServiceTemplate:
- type: string
- shardServiceTemplate:
- type: string
- replicaServiceTemplate:
- type: string
- shardsCount:
- type: integer
- minimum: 1
- shards:
- type: array
- nullable: true
- items:
- # Host
- type: object
- properties:
- name:
- type: string
- minLength: 1
- # See namePartReplicaMaxLen const
- maxLength: 15
- pattern: "^[a-zA-Z0-9-]{0,15}$"
- tcpPort:
- type: integer
- minimum: 1
- maximum: 65535
- httpPort:
- type: integer
- minimum: 1
- maximum: 65535
- interserverHttpPort:
- type: integer
- minimum: 1
- maximum: 65535
- settings:
- type: object
- nullable: true
- files:
- type: object
- nullable: true
- templates:
- type: object
- nullable: true
- properties:
- hostTemplate:
- type: string
- podTemplate:
- type: string
- dataVolumeClaimTemplate:
- type: string
- logVolumeClaimTemplate:
- type: string
- serviceTemplate:
- type: string
- clusterServiceTemplate:
- type: string
- shardServiceTemplate:
- type: string
- replicaServiceTemplate:
- type: string
- templates:
- type: object
- nullable: true
- properties:
- hostTemplates:
- type: array
- nullable: true
- items:
- type: object
- #required:
- # - name
- properties:
- name:
- type: string
- portDistribution:
- type: array
- nullable: true
- items:
- type: object
- #required:
- # - type
- properties:
- type:
- type: string
- enum:
- # List PortDistributionXXX constants
- - ""
- - "Unspecified"
- - "ClusterScopeIndex"
- spec:
- # Host
- type: object
- properties:
- name:
- type: string
- minLength: 1
- # See namePartReplicaMaxLen const
- maxLength: 15
- pattern: "^[a-zA-Z0-9-]{0,15}$"
- tcpPort:
- type: integer
- minimum: 1
- maximum: 65535
- httpPort:
- type: integer
- minimum: 1
- maximum: 65535
- interserverHttpPort:
- type: integer
- minimum: 1
- maximum: 65535
- settings:
- type: object
- nullable: true
- files:
- type: object
- nullable: true
- templates:
- type: object
- nullable: true
- properties:
- hostTemplate:
- type: string
- podTemplate:
- type: string
- dataVolumeClaimTemplate:
- type: string
- logVolumeClaimTemplate:
- type: string
- serviceTemplate:
- type: string
- clusterServiceTemplate:
- type: string
- shardServiceTemplate:
- type: string
- replicaServiceTemplate:
- type: string
-
- podTemplates:
- type: array
- nullable: true
- items:
- type: object
- #required:
- # - name
- properties:
- name:
- type: string
- generateName:
- type: string
- zone:
- type: object
- #required:
- # - values
- properties:
- key:
- type: string
- values:
- type: array
- nullable: true
- items:
- type: string
- distribution:
- # DEPRECATED
- type: string
- enum:
- - ""
- - "Unspecified"
- - "OnePerHost"
- podDistribution:
- type: array
- nullable: true
- items:
- type: object
- #required:
- # - type
- properties:
- type:
- type: string
- enum:
- # List PodDistributionXXX constants
- - ""
- - "Unspecified"
- - "ClickHouseAntiAffinity"
- - "ShardAntiAffinity"
- - "ReplicaAntiAffinity"
- - "AnotherNamespaceAntiAffinity"
- - "AnotherClickHouseInstallationAntiAffinity"
- - "AnotherClusterAntiAffinity"
- - "MaxNumberPerNode"
- - "NamespaceAffinity"
- - "ClickHouseInstallationAffinity"
- - "ClusterAffinity"
- - "ShardAffinity"
- - "ReplicaAffinity"
- - "PreviousTailAffinity"
- - "CircularReplication"
- scope:
- type: string
- enum:
- # list PodDistributionScopeXXX constants
- - ""
- - "Unspecified"
- - "Shard"
- - "Replica"
- - "Cluster"
- - "ClickHouseInstallation"
- - "Namespace"
- number:
- type: integer
- minimum: 0
- maximum: 65535
- spec:
- # TODO specify PodSpec
- type: object
- nullable: true
- volumeClaimTemplates:
- type: array
- nullable: true
- items:
- type: object
- #required:
- # - name
- # - spec
- properties:
- name:
- type: string
- reclaimPolicy:
- type: string
- enum:
- - ""
- - "Retain"
- - "Delete"
- metadata:
- type: object
- nullable: true
- spec:
- # TODO specify PersistentVolumeClaimSpec
- type: object
- nullable: true
- serviceTemplates:
- type: array
- nullable: true
- items:
- type: object
- #required:
- # - name
- # - spec
- properties:
- name:
- type: string
- generateName:
- type: string
- metadata:
- # TODO specify ObjectMeta
- type: object
- nullable: true
- spec:
- # TODO specify ServiceSpec
- type: object
- nullable: true
- useTemplates:
- type: array
- nullable: true
- items:
- type: object
- #required:
- # - name
- properties:
- name:
- type: string
- namespace:
- type: string
- useType:
- type: string
- enum:
- # List useTypeXXX constants from model
- - ""
- - "merge"
- ---
- apiVersion: apiextensions.k8s.io/v1beta1
- kind: CustomResourceDefinition
- metadata:
- name: clickhouseoperatorconfigurations.clickhouse.altinity.com
- spec:
- group: clickhouse.altinity.com
- version: v1
- scope: Namespaced
- names:
- kind: ClickHouseOperatorConfiguration
- singular: clickhouseoperatorconfiguration
- plural: clickhouseoperatorconfigurations
- shortNames:
- - chopconf
- additionalPrinterColumns:
- - name: namespaces
- type: string
- description: Watch namespaces
- priority: 0 # show in standard view
- JSONPath: .status
- # TODO return to this feature later
- # Pruning unknown fields. FEATURE STATE: Kubernetes v1.15
- # Probably full specification may be needed
- # preserveUnknownFields: false
- validation:
- openAPIV3Schema:
- type: object
- properties:
- spec:
- type: object
- x-kubernetes-preserve-unknown-fields: true
- properties:
- watchNamespaces:
- type: array
- items:
- type: string
- chCommonConfigsPath:
- type: string
- chHostConfigsPath:
- type: string
- chUsersConfigsPath:
- type: string
- chiTemplatesPath:
- type: string
- statefulSetUpdateTimeout:
- type: integer
- statefulSetUpdatePollPeriod:
- type: integer
- onStatefulSetCreateFailureAction:
- type: string
- onStatefulSetUpdateFailureAction:
- type: string
- chConfigUserDefaultProfile:
- type: string
- chConfigUserDefaultQuota:
- type: string
- chConfigUserDefaultNetworksIP:
- type: array
- items:
- type: string
- chConfigUserDefaultPassword:
- type: string
- chConfigNetworksHostRegexpTemplate:
- type: string
- chUsername:
- type: string
- chPassword:
- type: string
- chCredentialsSecretNamespace:
- type: string
- chCredentialsSecretName:
- type: string
- chPort:
- type: integer
- minimum: 1
- maximum: 65535
- logtostderr:
- type: string
- alsologtostderr:
- type: string
- v:
- type: string
- stderrthreshold:
- type: string
- vmodule:
- type: string
- log_backtrace_at:
- type: string
- reconcileThreadsNumber:
- type: integer
- minimum: 1
- maximum: 65535
- reconcileWaitExclude:
- type: string
- reconcileWaitInclude:
- type: string
- excludeFromPropagationLabels:
- type: array
- items:
- type: string
- appendScopeLabels:
- type: string
- enum:
- # List StringBoolXXX constants from model
- - ""
- - "0"
- - "1"
- - "False"
- - "false"
- - "True"
- - "true"
- - "No"
- - "no"
- - "Yes"
- - "yes"
- - "Off"
- - "off"
- - "On"
- - "on"
- - "Disable"
- - "disable"
- - "Enable"
- - "enable"
- - "Disabled"
- - "disabled"
- - "Enabled"
- - "enabled"
- ---
- # Possible Template Parameters:
- #
- # kube-system
- #
- # Setup ServiceAccount
- # ServiceAccount would be created in kubectl-specified namespace
- apiVersion: v1
- kind: ServiceAccount
- metadata:
- name: clickhouse-operator
- namespace: kube-system
- ---
- apiVersion: rbac.authorization.k8s.io/v1
- kind: ClusterRole
- metadata:
- name: clickhouse-operator-kube-system
- rules:
- - apiGroups:
- - ""
- resources:
- - configmaps
- - services
- verbs:
- - create
- - delete
- - get
- - patch
- - update
- - apiGroups:
- - ""
- resources:
- - events
- verbs:
- - create
- - apiGroups:
- - ""
- resources:
- - persistentvolumeclaims
- verbs:
- - delete
- - get
- - list
- - patch
- - update
- - watch
- - apiGroups:
- - ""
- resources:
- - persistentvolumes
- - pods
- verbs:
- - get
- - list
- - patch
- - update
- - watch
- - apiGroups:
- - apps
- resources:
- - statefulsets
- verbs:
- - create
- - delete
- - get
- - patch
- - update
- - apiGroups:
- - clickhouse.altinity.com
- resources:
- - clickhouseinstallations
- verbs:
- - delete
- - get
- - patch
- - update
- - apiGroups:
- - apps
- resourceNames:
- - clickhouse-operator
- resources:
- - deployments
- verbs:
- - get
- - patch
- - update
- - delete
- - apiGroups:
- - apps
- resources:
- - replicasets
- verbs:
- - delete
- - get
- - patch
- - update
- - apiGroups:
- - ""
- resources:
- - configmaps
- - endpoints
- - services
- verbs:
- - get
- - list
- - watch
- - apiGroups:
- - apps
- resources:
- - statefulsets
- verbs:
- - get
- - list
- - watch
- - apiGroups:
- - clickhouse.altinity.com
- resources:
- - clickhouseinstallations
- - clickhouseinstallationtemplates
- - clickhouseoperatorconfigurations
- verbs:
- - get
- - list
- - watch
- ---
- # Setup ClusterRoleBinding between ClusterRole and ServiceAccount.
- # ClusterRoleBinding is namespace-less and must have unique name
- apiVersion: rbac.authorization.k8s.io/v1
- kind: ClusterRoleBinding
- metadata:
- name: clickhouse-operator-kube-system
- roleRef:
- apiGroup: rbac.authorization.k8s.io
- kind: ClusterRole
- name: clickhouse-operator-kube-system
- subjects:
- - kind: ServiceAccount
- name: clickhouse-operator
- namespace: kube-system
- ---
- # Possible Template Parameters:
- #
- # kube-system
- # altinity/clickhouse-operator:0.15.0
- # etc-clickhouse-operator-files
- #
- apiVersion: v1
- kind: ConfigMap
- metadata:
- name: etc-clickhouse-operator-files
- namespace: kube-system
- labels:
- app: clickhouse-operator
- data:
- config.yaml: |
- ################################################
- ##
- ## Watch Namespaces Section
- ##
- ################################################
-
- # List of namespaces where clickhouse-operator watches for events.
- # Concurrently running operators should watch on different namespaces
- #watchNamespaces:
- # - dev
- # - test
- # - info
- # - onemore
-
- ################################################
- ##
- ## Additional Configuration Files Section
- ##
- ################################################
-
- # Path to folder where ClickHouse configuration files common for all instances within CHI are located.
- chCommonConfigsPath: config.d
-
- # Path to folder where ClickHouse configuration files unique for each instance (host) within CHI are located.
- chHostConfigsPath: conf.d
-
- # Path to folder where ClickHouse configuration files with users settings are located.
- # Files are common for all instances within CHI
- chUsersConfigsPath: users.d
-
- # Path to folder where ClickHouseInstallation .yaml manifests are located.
- # Manifests are applied in sorted alpha-numeric order
- chiTemplatesPath: templates.d
-
- ################################################
- ##
- ## Cluster Create/Update/Delete Objects Section
- ##
- ################################################
-
- # How many seconds to wait for created/updated StatefulSet to be Ready
- statefulSetUpdateTimeout: 300
-
- # How many seconds to wait between checks for created/updated StatefulSet status
- statefulSetUpdatePollPeriod: 5
-
- # What to do in case created StatefulSet is not in Ready after `statefulSetUpdateTimeout` seconds
- # Possible options:
- # 1. abort - do nothing, just break the process and wait for admin
- # 2. delete - delete newly created problematic StatefulSet
- # 3. ignore - ignore error, pretend nothing happened and move on to the next StatefulSet
- onStatefulSetCreateFailureAction: ignore
-
- # What to do in case updated StatefulSet is not in Ready after `statefulSetUpdateTimeout` seconds
- # Possible options:
- # 1. abort - do nothing, just break the process and wait for admin
- # 2. rollback - delete Pod and rollback StatefulSet to previous Generation.
- # Pod would be recreated by StatefulSet based on rollback-ed configuration
- # 3. ignore - ignore error, pretend nothing happened and move on to the next StatefulSet
- onStatefulSetUpdateFailureAction: rollback
-
- ################################################
- ##
- ## ClickHouse Settings Section
- ##
- ################################################
-
- # Default values for ClickHouse user configuration
- # 1. user/profile - string
- # 2. user/quota - string
- # 3. user/networks/ip - multiple strings
- # 4. user/password - string
- chConfigUserDefaultProfile: default
- chConfigUserDefaultQuota: default
- chConfigUserDefaultNetworksIP:
- - "::1"
- - "127.0.0.1"
- chConfigUserDefaultPassword: "default"
-
- # Default host_regexp to limit network connectivity from outside
- chConfigNetworksHostRegexpTemplate: "(chi-{chi}-[^.]+\\d+-\\d+|clickhouse\\-{chi})\\.{namespace}\\.svc\\.cluster\\.local$"
-
- ################################################
- ##
- ## Access to ClickHouse instances
- ##
- ################################################
-
- # ClickHouse credentials (username, password and port) to be used by operator to connect to ClickHouse instances
- # for:
- # 1. Metrics requests
- # 2. Schema maintenance
- # 3. DROP DNS CACHE
- # User with such credentials can be specified in additional ClickHouse .xml config files,
- # located in `chUsersConfigsPath` folder
- chUsername: clickhouse_operator
- chPassword: clickhouse_operator_password
-
- # Location of k8s Secret with username and password to be used by operator to connect to ClickHouse instances
- # Can be used instead of explicitly specified username and password
- chCredentialsSecretNamespace: ""
- chCredentialsSecretName: ""
-
- # Port where to connect to ClickHouse instances to
- chPort: 8123
-
- ################################################
- ##
- ## Log parameters
- ##
- ################################################
-
- logtostderr: "true"
- alsologtostderr: "false"
- v: "1"
- stderrthreshold: ""
- vmodule: ""
- log_backtrace_at: ""
-
- ################################################
- ##
- ## Runtime parameters
- ##
- ################################################
-
- # Max number of concurrent reconciles in progress
- reconcileThreadsNumber: 10
- reconcileWaitExclude: false
- reconcileWaitInclude: false
-
- ################################################
- ##
- ## Labels management parameters
- ##
- ################################################
-
- # When propagating labels from the chi's `metadata.labels` section to child objects' `metadata.labels`,
- # exclude labels from the following list:
- #excludeFromPropagationLabels:
- # - "labelA"
- # - "labelB"
-
- # Whether to append *Scope* labels to StatefulSet and Pod.
- # Full list of available *scope* labels check in labeler.go
- # LabelShardScopeIndex
- # LabelReplicaScopeIndex
- # LabelCHIScopeIndex
- # LabelCHIScopeCycleSize
- # LabelCHIScopeCycleIndex
- # LabelCHIScopeCycleOffset
- # LabelClusterScopeIndex
- # LabelClusterScopeCycleSize
- # LabelClusterScopeCycleIndex
- # LabelClusterScopeCycleOffset
- appendScopeLabels: "no"
- ---
- # Possible Template Parameters:
- #
- # kube-system
- # altinity/clickhouse-operator:0.15.0
- # etc-clickhouse-operator-confd-files
- #
- apiVersion: v1
- kind: ConfigMap
- metadata:
- name: etc-clickhouse-operator-confd-files
- namespace: kube-system
- labels:
- app: clickhouse-operator
- data:
- ---
- # Possible Template Parameters:
- #
- # kube-system
- # altinity/clickhouse-operator:0.15.0
- # etc-clickhouse-operator-configd-files
- #
- apiVersion: v1
- kind: ConfigMap
- metadata:
- name: etc-clickhouse-operator-configd-files
- namespace: kube-system
- labels:
- app: clickhouse-operator
- data:
- 01-clickhouse-01-listen.xml: |
- <yandex>
- <!-- Listen wildcard address to allow accepting connections from other containers and host network. -->
- <listen_host>::</listen_host>
- <listen_host>0.0.0.0</listen_host>
- <listen_try>1</listen_try>
- </yandex>
- 01-clickhouse-02-logger.xml: |
- <yandex>
- <logger>
- <!-- Possible levels: https://github.com/pocoproject/poco/blob/develop/Foundation/include/Poco/Logger.h#L105 -->
- <level>debug</level>
- <log>/var/log/clickhouse-server/clickhouse-server.log</log>
- <errorlog>/var/log/clickhouse-server/clickhouse-server.err.log</errorlog>
- <size>1000M</size>
- <count>10</count>
- <!-- Default behavior is autodetection (log to console if not daemon mode and is tty) -->
- <console>1</console>
- </logger>
- </yandex>
- 01-clickhouse-03-query_log.xml: |
- <yandex>
- <query_log replace="1">
- <database>system</database>
- <table>query_log</table>
- <engine>Engine = MergeTree PARTITION BY event_date ORDER BY event_time TTL event_date + interval 30 day</engine>
- <flush_interval_milliseconds>7500</flush_interval_milliseconds>
- </query_log>
- <query_thread_log remove="1"/>
- </yandex>
- 01-clickhouse-04-part_log.xml: |
- <yandex>
- <part_log replace="1">
- <database>system</database>
- <table>part_log</table>
- <engine>Engine = MergeTree PARTITION BY event_date ORDER BY event_time TTL event_date + interval 30 day</engine>
- <flush_interval_milliseconds>7500</flush_interval_milliseconds>
- </part_log>
- </yandex>
- ---
- # Possible Template Parameters:
- #
- # kube-system
- # altinity/clickhouse-operator:0.15.0
- # etc-clickhouse-operator-templatesd-files
- #
- apiVersion: v1
- kind: ConfigMap
- metadata:
- name: etc-clickhouse-operator-templatesd-files
- namespace: kube-system
- labels:
- app: clickhouse-operator
- data:
- 001-templates.json.example: |
- {
- "apiVersion": "clickhouse.altinity.com/v1",
- "kind": "ClickHouseInstallationTemplate",
- "metadata": {
- "name": "01-default-volumeclaimtemplate"
- },
- "spec": {
- "templates": {
- "volumeClaimTemplates": [
- {
- "name": "chi-default-volume-claim-template",
- "spec": {
- "accessModes": [
- "ReadWriteOnce"
- ],
- "resources": {
- "requests": {
- "storage": "2Gi"
- }
- }
- }
- }
- ],
- "podTemplates": [
- {
- "name": "chi-default-oneperhost-pod-template",
- "distribution": "OnePerHost",
- "spec": {
- "containers" : [
- {
- "name": "clickhouse",
- "image": "yandex/clickhouse-server:19.3.7",
- "ports": [
- {
- "name": "http",
- "containerPort": 8123
- },
- {
- "name": "client",
- "containerPort": 9000
- },
- {
- "name": "interserver",
- "containerPort": 9009
- }
- ]
- }
- ]
- }
- }
- ]
- }
- }
- }
- default-pod-template.yaml.example: |
- apiVersion: "clickhouse.altinity.com/v1"
- kind: "ClickHouseInstallationTemplate"
- metadata:
- name: "default-oneperhost-pod-template"
- spec:
- templates:
- podTemplates:
- - name: default-oneperhost-pod-template
- distribution: "OnePerHost"
- default-storage-template.yaml.example: |
- apiVersion: "clickhouse.altinity.com/v1"
- kind: "ClickHouseInstallationTemplate"
- metadata:
- name: "default-storage-template-2Gi"
- spec:
- templates:
- volumeClaimTemplates:
- - name: default-storage-template-2Gi
- spec:
- accessModes:
- - ReadWriteOnce
- resources:
- requests:
- storage: 2Gi
- readme: |
- Templates in this folder are packaged with an operator and available via 'useTemplate'
- ---
- # Possible Template Parameters:
- #
- # kube-system
- # altinity/clickhouse-operator:0.15.0
- # etc-clickhouse-operator-usersd-files
- #
- apiVersion: v1
- kind: ConfigMap
- metadata:
- name: etc-clickhouse-operator-usersd-files
- namespace: kube-system
- labels:
- app: clickhouse-operator
- data:
- 01-clickhouse-user.xml: |
- <yandex>
- <users>
- <clickhouse_operator>
- <networks>
- <ip>127.0.0.1</ip>
- <ip>0.0.0.0/0</ip>
- <ip>::/0</ip>
- </networks>
- <password_sha256_hex>716b36073a90c6fe1d445ac1af85f4777c5b7a155cea359961826a030513e448</password_sha256_hex>
- <profile>clickhouse_operator</profile>
- <quota>default</quota>
- </clickhouse_operator>
- </users>
- <profiles>
- <clickhouse_operator>
- <log_queries>0</log_queries>
- <skip_unavailable_shards>1</skip_unavailable_shards>
- <http_connection_timeout>10</http_connection_timeout>
- </clickhouse_operator>
- </profiles>
- </yandex>
- 02-clickhouse-default-profile.xml: |
- <yandex>
- <profiles>
- <default>
- <log_queries>1</log_queries>
- <connect_timeout_with_failover_ms>1000</connect_timeout_with_failover_ms>
- <distributed_aggregation_memory_efficient>1</distributed_aggregation_memory_efficient>
- <parallel_view_processing>1</parallel_view_processing>
- </default>
- </profiles>
- </yandex>
- 03-database-ordinary.xml: |
- <!-- Remove it for ClickHouse versions before 20.4 -->
- <yandex>
- <profiles>
- <default>
- <default_database_engine>Ordinary</default_database_engine>
- </default>
- </profiles>
- </yandex>
- ---
- # Possible Template Parameters:
- #
- # kube-system
- # altinity/clickhouse-operator:0.15.0
- # altinity/metrics-exporter:0.15.0
- #
- # Setup Deployment for clickhouse-operator
- # Deployment would be created in kubectl-specified namespace
- kind: Deployment
- apiVersion: apps/v1
- metadata:
- name: clickhouse-operator
- namespace: kube-system
- labels:
- app: clickhouse-operator
- spec:
- replicas: 1
- selector:
- matchLabels:
- app: clickhouse-operator
- template:
- metadata:
- labels:
- app: clickhouse-operator
- annotations:
- prometheus.io/port: '8888'
- prometheus.io/scrape: 'true'
- spec:
- serviceAccountName: clickhouse-operator
- volumes:
- - name: etc-clickhouse-operator-folder
- configMap:
- name: etc-clickhouse-operator-files
- - name: etc-clickhouse-operator-confd-folder
- configMap:
- name: etc-clickhouse-operator-confd-files
- - name: etc-clickhouse-operator-configd-folder
- configMap:
- name: etc-clickhouse-operator-configd-files
- - name: etc-clickhouse-operator-templatesd-folder
- configMap:
- name: etc-clickhouse-operator-templatesd-files
- - name: etc-clickhouse-operator-usersd-folder
- configMap:
- name: etc-clickhouse-operator-usersd-files
- containers:
- - name: clickhouse-operator
- image: altinity/clickhouse-operator:0.15.0
- imagePullPolicy: Always
- volumeMounts:
- - name: etc-clickhouse-operator-folder
- mountPath: /etc/clickhouse-operator
- - name: etc-clickhouse-operator-confd-folder
- mountPath: /etc/clickhouse-operator/conf.d
- - name: etc-clickhouse-operator-configd-folder
- mountPath: /etc/clickhouse-operator/config.d
- - name: etc-clickhouse-operator-templatesd-folder
- mountPath: /etc/clickhouse-operator/templates.d
- - name: etc-clickhouse-operator-usersd-folder
- mountPath: /etc/clickhouse-operator/users.d
- env:
- # Pod-specific
- # spec.nodeName: ip-172-20-52-62.ec2.internal
- - name: OPERATOR_POD_NODE_NAME
- valueFrom:
- fieldRef:
- fieldPath: spec.nodeName
- # metadata.name: clickhouse-operator-6f87589dbb-ftcsf
- - name: OPERATOR_POD_NAME
- valueFrom:
- fieldRef:
- fieldPath: metadata.name
- # metadata.namespace: kube-system
- - name: OPERATOR_POD_NAMESPACE
- valueFrom:
- fieldRef:
- fieldPath: metadata.namespace
- # status.podIP: 100.96.3.2
- - name: OPERATOR_POD_IP
- valueFrom:
- fieldRef:
- fieldPath: status.podIP
- # spec.serviceAccount: clickhouse-operator
- # spec.serviceAccountName: clickhouse-operator
- - name: OPERATOR_POD_SERVICE_ACCOUNT
- valueFrom:
- fieldRef:
- fieldPath: spec.serviceAccountName
-
- # Container-specific
- - name: OPERATOR_CONTAINER_CPU_REQUEST
- valueFrom:
- resourceFieldRef:
- containerName: clickhouse-operator
- resource: requests.cpu
- - name: OPERATOR_CONTAINER_CPU_LIMIT
- valueFrom:
- resourceFieldRef:
- containerName: clickhouse-operator
- resource: limits.cpu
- - name: OPERATOR_CONTAINER_MEM_REQUEST
- valueFrom:
- resourceFieldRef:
- containerName: clickhouse-operator
- resource: requests.memory
- - name: OPERATOR_CONTAINER_MEM_LIMIT
- valueFrom:
- resourceFieldRef:
- containerName: clickhouse-operator
- resource: limits.memory
-
- - name: metrics-exporter
- image: altinity/metrics-exporter:0.15.0
- imagePullPolicy: Always
- volumeMounts:
- - name: etc-clickhouse-operator-folder
- mountPath: /etc/clickhouse-operator
- - name: etc-clickhouse-operator-confd-folder
- mountPath: /etc/clickhouse-operator/conf.d
- - name: etc-clickhouse-operator-configd-folder
- mountPath: /etc/clickhouse-operator/config.d
- - name: etc-clickhouse-operator-templatesd-folder
- mountPath: /etc/clickhouse-operator/templates.d
- - name: etc-clickhouse-operator-usersd-folder
- mountPath: /etc/clickhouse-operator/users.d
- ---
- # Possible Template Parameters:
- #
- # kube-system
- #
- # Setup ClusterIP Service to provide monitoring metrics for Prometheus
- # Service would be created in kubectl-specified namespace
- # In order to get access outside of k8s it should be exposed as:
- # kubectl --namespace prometheus port-forward service/prometheus 9090
- # and point browser to localhost:9090
- kind: Service
- apiVersion: v1
- metadata:
- name: clickhouse-operator-metrics
- namespace: kube-system
- labels:
- app: clickhouse-operator
- spec:
- ports:
- - port: 8888
- name: clickhouse-operator-metrics
- selector:
- app: clickhouse-operator
I replaced the namespace kube-system with ckk8s throughout the file, because the target cluster already has a kube-system namespace in use!
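That replacement can be done in one pass (a sketch; it rewrites every occurrence, which also renames clickhouse-operator-kube-system to clickhouse-operator-ckk8s in the ClusterRole and ClusterRoleBinding, matching the output below):

- sed -i 's/kube-system/ckk8s/g' clickhouse-operator-install.yaml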
2) Deploy
- kubectl apply -f clickhouse-operator-install.yaml -n ckk8s
-
- Output:
- customresourcedefinition.apiextensions.k8s.io/clickhouseinstallations.clickhouse.altinity.com configured
- customresourcedefinition.apiextensions.k8s.io/clickhouseinstallationtemplates.clickhouse.altinity.com configured
- customresourcedefinition.apiextensions.k8s.io/clickhouseoperatorconfigurations.clickhouse.altinity.com configured
- serviceaccount/clickhouse-operator created
- clusterrole.rbac.authorization.k8s.io/clickhouse-operator-ckk8s created
- clusterrolebinding.rbac.authorization.k8s.io/clickhouse-operator-ckk8s created
- configmap/etc-clickhouse-operator-files created
- configmap/etc-clickhouse-operator-confd-files created
- configmap/etc-clickhouse-operator-configd-files created
- configmap/etc-clickhouse-operator-templatesd-files created
- configmap/etc-clickhouse-operator-usersd-files created
- deployment.apps/clickhouse-operator created
- service/clickhouse-operator-metrics created
3) Verify
- kubectl get pod -n ckk8s
-
- Output:
- clickhouse-operator-7944c8f9c-qcvvd 2/2 Running 0 109s
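If the pod is not 2/2 Running, the operator's own log is the first place to look (a sketch; the container names come from the Deployment above):

- kubectl -n ckk8s logs deployment/clickhouse-operator -c clickhouse-operator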
1) Create the zookeeper-3-node.yaml file
vim zookeeper-3-node.yaml
The YAML content is:
- # Setup Service to provide access to Zookeeper for clients
- apiVersion: v1
- kind: Service
- metadata:
- # DNS would be like zookeeper.zoons
- name: zookeeper
- labels:
- app: zookeeper
- spec:
- ports:
- - port: 2181
- name: client
- - port: 7000
- name: prometheus
- selector:
- app: zookeeper
- what: node
- ---
- # Setup Headless Service for StatefulSet
- apiVersion: v1
- kind: Service
- metadata:
- # DNS would be like zookeeper-0.zookeepers.etc
- name: zookeepers
- labels:
- app: zookeeper
- spec:
- ports:
- - port: 2888
- name: server
- - port: 3888
- name: leader-election
- clusterIP: None
- selector:
- app: zookeeper
- what: node
- ---
- # Setup max number of unavailable pods in StatefulSet
- apiVersion: policy/v1beta1
- kind: PodDisruptionBudget
- metadata:
- name: zookeeper-pod-disruption-budget
- spec:
- selector:
- matchLabels:
- app: zookeeper
- maxUnavailable: 1
- ---
- # Setup Zookeeper StatefulSet
- # Possible params:
- # 1. replicas
- # 2. memory
- # 3. cpu
- # 4. storage
- # 5. storageClassName
- # 6. user to run app
- apiVersion: apps/v1
- kind: StatefulSet
- metadata:
- # nodes would be named as zookeeper-0, zookeeper-1, zookeeper-2
- name: zookeeper
- spec:
- selector:
- matchLabels:
- app: zookeeper
- serviceName: zookeepers
- replicas: 3
- updateStrategy:
- type: RollingUpdate
- podManagementPolicy: Parallel
- template:
- metadata:
- labels:
- app: zookeeper
- what: node
- annotations:
- prometheus.io/port: '7000'
- prometheus.io/scrape: 'true'
- spec:
- affinity:
- podAntiAffinity:
- requiredDuringSchedulingIgnoredDuringExecution:
- - labelSelector:
- matchExpressions:
- - key: "app"
- operator: In
- values:
- - zookeeper
- topologyKey: "kubernetes.io/hostname"
- containers:
- - name: kubernetes-zookeeper
- imagePullPolicy: IfNotPresent
- image: "docker.io/zookeeper:3.6.3"
- resources:
- requests:
- memory: "512M"
- cpu: "1"
- limits:
- memory: "4Gi"
- cpu: "2"
- ports:
- - containerPort: 2181
- name: client
- - containerPort: 2888
- name: server
- - containerPort: 3888
- name: leader-election
- - containerPort: 7000
- name: prometheus
- # See those links for proper startup settings:
- # https://github.com/kow3ns/kubernetes-zookeeper/blob/master/docker/scripts/start-zookeeper
- # https://clickhouse.yandex/docs/en/operations/tips/#zookeeper
- # https://github.com/ClickHouse/ClickHouse/issues/11781
- command:
- - bash
- - -x
- - -c
- - |
- SERVERS=3 &&
- HOST=`hostname -s` &&
- DOMAIN=`hostname -d` &&
- CLIENT_PORT=2181 &&
- SERVER_PORT=2888 &&
- ELECTION_PORT=3888 &&
- PROMETHEUS_PORT=7000 &&
- ZOO_DATA_DIR=/var/lib/zookeeper/data &&
- ZOO_DATA_LOG_DIR=/var/lib/zookeeper/datalog &&
- {
- echo "clientPort=${CLIENT_PORT}"
- echo 'tickTime=2000'
- echo 'initLimit=300'
- echo 'syncLimit=10'
- echo 'maxClientCnxns=2000'
- echo 'maxSessionTimeout=60000000'
- echo "dataDir=${ZOO_DATA_DIR}"
- echo "dataLogDir=${ZOO_DATA_LOG_DIR}"
- echo 'autopurge.snapRetainCount=10'
- echo 'autopurge.purgeInterval=1'
- echo 'preAllocSize=131072'
- echo 'snapCount=3000000'
- echo 'leaderServes=yes'
- echo 'standaloneEnabled=false'
- echo '4lw.commands.whitelist=*'
- echo 'metricsProvider.className=org.apache.zookeeper.metrics.prometheus.PrometheusMetricsProvider'
- echo "metricsProvider.httpPort=${PROMETHEUS_PORT}"
- } > /conf/zoo.cfg &&
- {
- echo "zookeeper.root.logger=CONSOLE"
- echo "zookeeper.console.threshold=INFO"
- echo "log4j.rootLogger=\${zookeeper.root.logger}"
- echo "log4j.appender.CONSOLE=org.apache.log4j.ConsoleAppender"
- echo "log4j.appender.CONSOLE.Threshold=\${zookeeper.console.threshold}"
- echo "log4j.appender.CONSOLE.layout=org.apache.log4j.PatternLayout"
- echo "log4j.appender.CONSOLE.layout.ConversionPattern=%d{ISO8601} [myid:%X{myid}] - %-5p [%t:%C{1}@%L] - %m%n"
- } > /conf/log4j.properties &&
- echo 'JVMFLAGS="-Xms128M -Xmx4G -XX:+UseG1GC -XX:+CMSParallelRemarkEnabled"' > /conf/java.env &&
- if [[ $HOST =~ (.*)-([0-9]+)$ ]]; then
- NAME=${BASH_REMATCH[1]}
- ORD=${BASH_REMATCH[2]}
- else
- echo "Failed to parse name and ordinal of Pod"
- exit 1
- fi &&
- mkdir -p ${ZOO_DATA_DIR} &&
- mkdir -p ${ZOO_DATA_LOG_DIR} &&
- export MY_ID=$((ORD+1)) &&
- echo $MY_ID > $ZOO_DATA_DIR/myid &&
- for (( i=1; i<=$SERVERS; i++ )); do
- echo "server.$i=$NAME-$((i-1)).$DOMAIN:$SERVER_PORT:$ELECTION_PORT" >> /conf/zoo.cfg;
- done &&
- chown -Rv zookeeper "$ZOO_DATA_DIR" "$ZOO_DATA_LOG_DIR" "$ZOO_LOG_DIR" "$ZOO_CONF_DIR" &&
- zkServer.sh start-foreground
- readinessProbe:
- exec:
- command:
- - bash
- - -c
- - "OK=$(echo ruok | nc 127.0.0.1 2181); if [[ \"$OK\" == \"imok\" ]]; then exit 0; else exit 1; fi"
- initialDelaySeconds: 10
- timeoutSeconds: 5
- livenessProbe:
- exec:
- command:
- - bash
- - -c
- - "OK=$(echo ruok | nc 127.0.0.1 2181); if [[ \"$OK\" == \"imok\" ]]; then exit 0; else exit 1; fi"
- initialDelaySeconds: 10
- timeoutSeconds: 5
- volumeMounts:
- - name: datadir-volume
- mountPath: /var/lib/zookeeper
- # Run as a non-privileged user
- securityContext:
- runAsUser: 1000
- fsGroup: 1000
- volumeClaimTemplates:
- - metadata:
- name: datadir-volume
- spec:
- accessModes:
- - ReadWriteOnce
- resources:
- requests:
- storage: 25Gi
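Note that the volumeClaimTemplates section above requests 25Gi without naming a storageClassName, so it relies on the cluster's default StorageClass. If the ZooKeeper PVCs later sit in Pending, this is the usual cause, so it is worth checking up front:

- kubectl get storageclass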
2) Deploy ZooKeeper
- kubectl apply -f zookeeper-3-node.yaml -n ckk8s
-
-
- Output:
- service/zookeeper created
- service/zookeepers created
- poddisruptionbudget.policy/zookeeper-pod-disruption-budget created
- statefulset.apps/zookeeper created
3) Verify the ZooKeeper ensemble
List all pods in the namespace:
- kubectl get pod -n ckk8s
-
- Output:
- NAME READY STATUS RESTARTS AGE
- zookeeper-0 1/1 Running 0 2m29s
- zookeeper-1 1/1 Running 0 2m29s
- zookeeper-2 1/1 Running 0 2m28s
Check the services:
- kubectl get service -n ckk8s
-
-
- Output:
-
- NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
- zookeeper ClusterIP 10.96.93.181 <none> 2181/TCP,7000/TCP 2m46s
- zookeepers ClusterIP None <none> 2888/TCP,3888/TCP 2m46s
Check the StatefulSet:
- kubectl get statefulset -n ckk8s
-
- Output:
-
- NAME READY AGE
- zookeeper 3/3 4m34s
At this point the ZooKeeper ensemble has started and is running normally.
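Pod status alone does not prove the quorum formed, so it is worth probing each node with ZooKeeper's four-letter-word commands, which the config above explicitly whitelists (4lw.commands.whitelist=*). A minimal sketch:

- for i in 0 1 2; do
-   kubectl -n ckk8s exec zookeeper-$i -- bash -c 'echo stat | nc 127.0.0.1 2181' | grep Mode
- done
-
- # A healthy 3-node ensemble reports Mode: leader on one node and Mode: follower on the other two.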
1) Deploy 1 shard with 1 replica
Create the file sample-1shard-1replica.yaml
The YAML content is:
- apiVersion: "clickhouse.altinity.com/v1"
- kind: "ClickHouseInstallation"
- metadata:
- name: "sample-01"
- spec:
- configuration:
- clusters:
- - name: "cluster1"
- layout:
- shardsCount: 1
- replicasCount: 1
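This minimal spec does not reference the ZooKeeper ensemble deployed above; ZooKeeper only becomes necessary once replication is involved. For a replicated layout, a spec.configuration.zookeeper section would be added, for example (a sketch assuming the zookeeper client service in the ckk8s namespace):

- spec:
-   configuration:
-     zookeeper:
-       nodes:
-         - host: zookeeper.ckk8s
-           port: 2181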
Deploy:
- kubectl apply -f sample-1shard-1replica.yaml -n ckk8s
-
- Output:
- clickhouseinstallation.clickhouse.altinity.com/sample-01 created
2) Check that the pod is running
- kubectl get pod -n ckk8s
-
- Output:
- NAME READY STATUS RESTARTS AGE
- chi-sample-01-cluster1-0-0-0 1/1 Running 0 85s
3) Check the services
- kubectl get service -n ckk8s
-
- Output:
- NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
- chi-sample-01-cluster1-0-0   ClusterIP      None            <none>        8123/TCP,9000/TCP,9009/TCP      67s
- clickhouse-sample-01         LoadBalancer   10.104.76.159   <pending>     8123:30890/TCP,9000:32659/TCP   69s
4) Connect to the ClickHouse database
Option 1: access the cluster with clickhouse-client
The default username and password are set in the clickhouse-operator-install.yaml file and can be changed; the defaults are:
- chUsername: clickhouse_operator
- chPassword: clickhouse_operator_password
-
-
- # See the chUsername / chPassword entries in the ClickHouse Settings section of clickhouse-operator-install.yaml
Access then looks like this:
clickhouse-client --host hostname --user=clickhouse_operator --password=clickhouse_operator_password
Here hostname is the EXTERNAL-IP of the clickhouse-sample-01 service from step 3; since that address is still <pending> in this environment, this route is not usable for now.
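While the LoadBalancer address is pending, the same service can still be reached through a port-forward (a sketch; the ports match the service shown above):

- kubectl -n ckk8s port-forward service/clickhouse-sample-01 9000:9000
-
- # In another terminal:
- clickhouse-client --host 127.0.0.1 --port 9000 --user=clickhouse_operator --password=clickhouse_operator_password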
Option 2: use kubectl exec -it, followed by the pod name
- kubectl -n ckk8s exec -it chi-sample-01-cluster1-0-0-0 -- clickhouse-client
-
-
- Output:
- ClickHouse client version 21.8.5.7 (official build).
- Connecting to localhost:9000 as user default.
- Connected to ClickHouse server version 21.8.5 revision 54449.
-
- chi-sample-01-cluster1-0-0-0.chi-sample-01-cluster1-0-0.ckk8s.svc.cluster.local :)
Check the cluster names:
- select * from system.clusters
-
-
-
-
- ┌─cluster──────────────────────────────────────┬─shard_num─┬─shard_weight─┬─replica_num─┬─host_name──────────────────┬─host_address─┬─port─┬─is_local─┬─user────┬─default_database─┬─errors_count─┬─slowdowns_count─┬─estimated_recovery_time─┐
- │ all-replicated │ 1 │ 1 │ 1 │ chi-sample-01-cluster1-0-0 │ 127.0.0.1 │ 9000 │ 1 │ default │ │ 0 │ 0 │ 0 │
- │ all-sharded │ 1 │ 1 │ 1 │ chi-sample-01-cluster1-0-0 │ 127.0.0.1 │ 9000 │ 1 │ default │ │ 0 │ 0 │ 0 │
- │ cluster1 │ 1 │ 1 │ 1 │ chi-sample-01-cluster1-0-0 │ 127.0.0.1 │ 9000 │ 1 │ default │ │ 0 │ 0 │ 0 │
- │ test_cluster_two_shards │ 1 │ 1 │ 1 │ 127.0.0.1 │ 127.0.0.1 │ 9000 │ 1 │ default │ │ 0 │ 0 │ 0 │
- │ test_cluster_two_shards │ 2 │ 1 │ 1 │ 127.0.0.2 │ 127.0.0.2 │ 9000 │ 0 │ default │ │ 0 │ 0 │ 0 │
- │ test_cluster_two_shards_internal_replication │ 1 │ 1 │ 1 │ 127.0.0.1 │ 127.0.0.1 │ 9000 │ 1 │ default │ │ 0 │ 0 │ 0 │
- │ test_cluster_two_shards_internal_replication │ 2 │ 1 │ 1 │ 127.0.0.2 │ 127.0.0.2 │ 9000 │ 0 │ default │ │ 0 │ 0 │ 0 │
- │ test_cluster_two_shards_localhost │ 1 │ 1 │ 1 │ localhost │ 127.0.0.1 │ 9000 │ 1 │ default │ │ 0 │ 0 │ 0 │
- │ test_cluster_two_shards_localhost │ 2 │ 1 │ 1 │ localhost │ 127.0.0.1 │ 9000 │ 1 │ default │ │ 0 │ 0 │ 0 │
- │ test_shard_localhost │ 1 │ 1 │ 1 │ localhost │ 127.0.0.1 │ 9000 │ 1 │ default │ │ 0 │ 0 │ 0 │
- │ test_shard_localhost_secure │ 1 │ 1 │ 1 │ localhost │ 127.0.0.1 │ 9440 │ 0 │ default │ │ 0 │ 0 │ 0 │
- │ test_unavailable_shard │ 1 │ 1 │ 1 │ localhost │ 127.0.0.1 │ 9000 │ 1 │ default │ │ 0 │ 0 │ 0 │
- │ test_unavailable_shard │ 2 │ 1 │ 1 │ localhost │ 127.0.0.1 │ 1 │ 0 │ default │ │ 0 │ 0 │ 0 │
- └──────────────────────────────────────────────┴───────────┴──────────────┴─────────────┴────────────────────────────┴──────────────┴──────┴──────────┴─────────┴──────────────────┴──────────────┴─────────────────┴─────────────────────────┘
The installation can now be used normally.
1) Create the file sample-2shard_1replica.yaml with the following content:
- apiVersion: "clickhouse.altinity.com/v1"
- kind: "ClickHouseInstallation"
- metadata:
-   name: "sample-01"
- spec:
-   configuration:
-     clusters:
-       - name: "cluster1"
-         layout:
-           shardsCount: 2
-           replicasCount: 1
Only the shardsCount value needs to change.
2) Deploy
- kubectl apply -f sample-2shard_1replica.yaml -n ckk8s
-
- clickhouseinstallation.clickhouse.altinity.com/sample-01 configured
3) Check the pods
- kubectl get pod -n ckk8s
-
- NAME READY STATUS RESTARTS AGE
- chi-sample-01-cluster1-0-0-0 1/1 Running 0 21m
- chi-sample-01-cluster1-1-0-0 1/1 Running 0 36s
From the AGE column, the second pod is the newly created one.
4) Check the services
- kubectl get service -n ckk8s
-
-
- NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
- chi-sample-01-cluster1-0-0 ClusterIP None <none> 8123/TCP,9000/TCP,9009/TCP 22m
- chi-sample-01-cluster1-1-0 ClusterIP None <none> 8123/TCP,9000/TCP,9009/TCP 61s
- clickhouse-sample-01 LoadBalancer 10.96.69.232 <pending> 8123:32766/TCP,9000:30993/TCP 22m
Again from AGE, chi-sample-01-cluster1-1-0 (61s) is the newly created one.
5) Connect to the ClickHouse database
- kubectl -n ckk8s exec -it chi-sample-01-cluster1-0-0-0 -- clickhouse-client
-
-
- kubectl -n ckk8s exec -it chi-sample-01-cluster1-1-0-0 -- clickhouse-client
6) Check the cluster
Both nodes return the same result:
- select * from system.clusters
-
- ┌─cluster──────────────────────────────────────┬─shard_num─┬─shard_weight─┬─replica_num─┬─host_name──────────────────┬─host_address─┬─port─┬─is_local─┬─user────┬─default_database─┬─errors_count─┬─slowdowns_count─┬─estimated_recovery_time─┐
- │ all-replicated │ 1 │ 1 │ 1 │ chi-sample-01-cluster1-0-0 │ 127.0.0.1 │ 9000 │ 1 │ default │ │ 0 │ 0 │ 0 │
- │ all-replicated │ 1 │ 1 │ 2 │ chi-sample-01-cluster1-1-0 │ 10.217.8.27 │ 9000 │ 0 │ default │ │ 0 │ 0 │ 0 │
- │ all-sharded │ 1 │ 1 │ 1 │ chi-sample-01-cluster1-0-0 │ 127.0.0.1 │ 9000 │ 1 │ default │ │ 0 │ 0 │ 0 │
- │ all-sharded │ 2 │ 1 │ 1 │ chi-sample-01-cluster1-1-0 │ 10.217.8.27 │ 9000 │ 0 │ default │ │ 0 │ 0 │ 0 │
- │ cluster1 │ 1 │ 1 │ 1 │ chi-sample-01-cluster1-0-0 │ 127.0.0.1 │ 9000 │ 1 │ default │ │ 0 │ 0 │ 0 │
- │ cluster1 │ 2 │ 1 │ 1 │ chi-sample-01-cluster1-1-0 │ 10.217.8.27 │ 9000 │ 0 │ default │ │ 0 │ 0 │ 0 │
- │ test_cluster_two_shards │ 1 │ 1 │ 1 │ 127.0.0.1 │ 127.0.0.1 │ 9000 │ 1 │ default │ │ 0 │ 0 │ 0 │
- │ test_cluster_two_shards │ 2 │ 1 │ 1 │ 127.0.0.2 │ 127.0.0.2 │ 9000 │ 0 │ default │ │ 0 │ 0 │ 0 │
- │ test_cluster_two_shards_internal_replication │ 1 │ 1 │ 1 │ 127.0.0.1 │ 127.0.0.1 │ 9000 │ 1 │ default │ │ 0 │ 0 │ 0 │
- │ test_cluster_two_shards_internal_replication │ 2 │ 1 │ 1 │ 127.0.0.2 │ 127.0.0.2 │ 9000 │ 0 │ default │ │ 0 │ 0 │ 0 │
- │ test_cluster_two_shards_localhost │ 1 │ 1 │ 1 │ localhost │ 127.0.0.1 │ 9000 │ 1 │ default │ │ 0 │ 0 │ 0 │
- │ test_cluster_two_shards_localhost │ 2 │ 1 │ 1 │ localhost │ 127.0.0.1 │ 9000 │ 1 │ default │ │ 0 │ 0 │ 0 │
- │ test_shard_localhost │ 1 │ 1 │ 1 │ localhost │ 127.0.0.1 │ 9000 │ 1 │ default │ │ 0 │ 0 │ 0 │
- │ test_shard_localhost_secure │ 1 │ 1 │ 1 │ localhost │ 127.0.0.1 │ 9440 │ 0 │ default │ │ 0 │ 0 │ 0 │
- │ test_unavailable_shard │ 1 │ 1 │ 1 │ localhost │ 127.0.0.1 │ 9000 │ 1 │ default │ │ 0 │ 0 │ 0 │
- │ test_unavailable_shard │ 2 │ 1 │ 1 │ localhost │ 127.0.0.1 │ 1 │ 0 │ default │ │ 0 │ 0 │ 0 │
- └──────────────────────────────────────────────┴───────────┴──────────────┴─────────────┴────────────────────────────┴──────────────┴──────┴──────────┴─────────┴──────────────────┴──────────────┴─────────────────┴─────────────────────────┘
Why does cluster1 appear twice? system.clusters has one row per host, i.e. per (shard, replica) pair, so a cluster with 2 shards of 1 replica each contributes two rows.
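Rather than reading the wide table, the topology can be summarized with a GROUP BY; a quick sketch run from the workstation:
- # one row per (cluster, shard, replica) host in system.clusters
- kubectl -n ckk8s exec -it chi-sample-01-cluster1-0-0-0 -- clickhouse-client \
-   --query "SELECT cluster, count() AS hosts FROM system.clusters GROUP BY cluster ORDER BY cluster"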
- kubectl -n ckk8s exec -it chi-sample-01-cluster1-1-0-0 -- bash
-
-
- root@chi-sample-01-cluster1-1-0-0:/# ls
- bin boot dev docker-entrypoint-initdb.d entrypoint.sh etc home lib lib32 lib64 libx32 media mnt opt proc root run sbin srv sys tmp usr var
This simply drops you into the clickhouse-server container.
1) Inspect the default server configuration files
- cd /etc/clickhouse-server
-
-
- root@chi-sample-01-cluster1-1-0-0:/# cd /etc/clickhouse-server
- root@chi-sample-01-cluster1-1-0-0:/etc/clickhouse-server# ls
- conf.d config.d config.xml users.d users.xml
2) Inspect the default data directory
- root@chi-sample-01-cluster1-1-0-0:/var/log/clickhouse-server# cd /var/lib/clickhouse
- root@chi-sample-01-cluster1-1-0-0:/var/lib/clickhouse# ls
- access data dictionaries_lib flags format_schemas metadata metadata_dropped preprocessed_configs status store tmp user_files
- root@chi-sample-01-cluster1-1-0-0:/var/lib/clickhouse#
3) Inspect the default log directory
- root@chi-sample-01-cluster1-1-0-0:/etc/clickhouse-server# cd /var/log/clickhouse-server
- root@chi-sample-01-cluster1-1-0-0:/var/log/clickhouse-server# ls
- clickhouse-server.err.log clickhouse-server.log
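These log files are the first place to look when a pod misbehaves; for example, from the same shell:
- # last lines of the error log (paths from the listing above)
- tail -n 20 /var/log/clickhouse-server/clickhouse-server.err.log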
4) Inspect the shard and replica configuration
Open chop-generated-remote_servers.xml under config.d:
- root@chi-sample-01-cluster1-1-0-0:/etc/clickhouse-server/config.d# cat chop-generated-remote_servers.xml
- <yandex>
-   <remote_servers>
-     <!-- User-specified clusters -->
-     <cluster1>
-       <shard>
-         <internal_replication>false</internal_replication>
-         <replica>
-           <host>chi-sample-01-cluster1-0-0</host>
-           <port>9000</port>
-         </replica>
-       </shard>
-       <shard>
-         <internal_replication>false</internal_replication>
-         <replica>
-           <host>chi-sample-01-cluster1-1-0</host>
-           <port>9000</port>
-         </replica>
-       </shard>
-     </cluster1>
-     <!-- Autogenerated clusters -->
-     <all-replicated>
-       <shard>
-         <internal_replication>true</internal_replication>
-         <replica>
-           <host>chi-sample-01-cluster1-0-0</host>
-           <port>9000</port>
-         </replica>
-         <replica>
-           <host>chi-sample-01-cluster1-1-0</host>
-           <port>9000</port>
-         </replica>
-       </shard>
-     </all-replicated>
-     <all-sharded>
-       <shard>
-         <internal_replication>false</internal_replication>
-         <replica>
-           <host>chi-sample-01-cluster1-0-0</host>
-           <port>9000</port>
-         </replica>
-       </shard>
-       <shard>
-         <internal_replication>false</internal_replication>
-         <replica>
-           <host>chi-sample-01-cluster1-1-0</host>
-           <port>9000</port>
-         </replica>
-       </shard>
-     </all-sharded>
-   </remote_servers>
- </yandex>
Notice that this configuration file contains only the shard and replica topology.
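Every cluster declared in remote_servers is directly addressable from SQL, for example through the cluster() table function; a small sketch, run inside the container, that fans a trivial query out to every host of all-sharded:
- # runs SELECT hostName() on each host of the 'all-sharded' cluster
- clickhouse-client --query "SELECT hostName() FROM cluster('all-sharded', system.one)"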
The examples in part 3 have not used ZooKeeper yet. clickhouse-operator does not provide ZooKeeper; it must be installed separately, which we already did in part 2.
1) Create the file sample-2shard-2replica.yaml with the following content:
- apiVersion: "clickhouse.altinity.com/v1"
- kind: "ClickHouseInstallation"
- metadata:
-   name: "sample-01"
- spec:
-   configuration:
-     zookeeper:
-       nodes:
-         - host: zookeeper.ckk8s
-           port: 2181
-     clusters:
-       - name: "cluster1"
-         layout:
-           shardsCount: 2
-           replicasCount: 2
2) Deploy
- kubectl apply -f sample-2shard-2replica.yaml -n ckk8s
-
- clickhouseinstallation.clickhouse.altinity.com/sample-01 configured
3) Check the pods
- kubectl get pod -n ckk8s
-
- NAME READY STATUS RESTARTS AGE
- chi-sample-01-cluster1-0-0-0 1/1 Running 0 2m24s
- chi-sample-01-cluster1-0-1-0 0/1 ContainerCreating 0 2s
- chi-sample-01-cluster1-1-0-0 1/1 Running 0 5m10s
The new replica pod is being created.
4) Connect to the ClickHouse database
- kubectl -n ckk8s exec -it chi-sample-01-cluster1-0-1-0 -- clickhouse-client
-
-
- ClickHouse client version 21.8.5.7 (official build).
- Connecting to localhost:9000 as user default.
- Connected to ClickHouse server version 21.8.5 revision 54449.
5) Check the cluster
- select * from system.clusters;
-
- ┌─cluster──────────────────────────────────────┬─shard_num─┬─shard_weight─┬─replica_num─┬─host_name──────────────────┬─host_address─┬─port─┬─is_local─┬─user────┬─default_database─┬─errors_count─┬─slowdowns_count─┬─estimated_recovery_time─┐
- │ all-replicated │ 1 │ 1 │ 1 │ chi-sample-01-cluster1-0-0 │ 10.217.3.193 │ 9000 │ 0 │ default │ │ 0 │ 0 │ 0 │
- │ all-replicated │ 1 │ 1 │ 2 │ chi-sample-01-cluster1-0-1 │ 127.0.0.1 │ 9000 │ 1 │ default │ │ 0 │ 0 │ 0 │
- │ all-replicated │ 1 │ 1 │ 3 │ chi-sample-01-cluster1-1-0 │ 10.217.3.140 │ 9000 │ 0 │ default │ │ 0 │ 0 │ 0 │
- │ all-replicated │ 1 │ 1 │ 4 │ chi-sample-01-cluster1-1-1 │ 10.217.3.123 │ 9000 │ 0 │ default │ │ 0 │ 0 │ 0 │
- │ all-sharded │ 1 │ 1 │ 1 │ chi-sample-01-cluster1-0-0 │ 10.217.3.193 │ 9000 │ 0 │ default │ │ 0 │ 0 │ 0 │
- │ all-sharded │ 2 │ 1 │ 1 │ chi-sample-01-cluster1-0-1 │ 127.0.0.1 │ 9000 │ 1 │ default │ │ 0 │ 0 │ 0 │
- │ all-sharded │ 3 │ 1 │ 1 │ chi-sample-01-cluster1-1-0 │ 10.217.3.140 │ 9000 │ 0 │ default │ │ 0 │ 0 │ 0 │
- │ all-sharded │ 4 │ 1 │ 1 │ chi-sample-01-cluster1-1-1 │ 10.217.3.123 │ 9000 │ 0 │ default │ │ 0 │ 0 │ 0 │
- │ cluster1 │ 1 │ 1 │ 1 │ chi-sample-01-cluster1-0-0 │ 10.217.3.193 │ 9000 │ 0 │ default │ │ 0 │ 0 │ 0 │
- │ cluster1 │ 1 │ 1 │ 2 │ chi-sample-01-cluster1-0-1 │ 127.0.0.1 │ 9000 │ 1 │ default │ │ 0 │ 0 │ 0 │
- │ cluster1 │ 2 │ 1 │ 1 │ chi-sample-01-cluster1-1-0 │ 10.217.3.140 │ 9000 │ 0 │ default │ │ 0 │ 0 │ 0 │
- │ cluster1 │ 2 │ 1 │ 2 │ chi-sample-01-cluster1-1-1 │ 10.217.3.123 │ 9000 │ 0 │ default │ │ 0 │ 0 │ 0 │
- │ test_cluster_two_shards │ 1 │ 1 │ 1 │ 127.0.0.1 │ 127.0.0.1 │ 9000 │ 1 │ default │ │ 0 │ 0 │ 0 │
- │ test_cluster_two_shards │ 2 │ 1 │ 1 │ 127.0.0.2 │ 127.0.0.2 │ 9000 │ 0 │ default │ │ 0 │ 0 │ 0 │
- │ test_cluster_two_shards_internal_replication │ 1 │ 1 │ 1 │ 127.0.0.1 │ 127.0.0.1 │ 9000 │ 1 │ default │ │ 0 │ 0 │ 0 │
- │ test_cluster_two_shards_internal_replication │ 2 │ 1 │ 1 │ 127.0.0.2 │ 127.0.0.2 │ 9000 │ 0 │ default │ │ 0 │ 0 │ 0 │
- │ test_cluster_two_shards_localhost │ 1 │ 1 │ 1 │ localhost │ 127.0.0.1 │ 9000 │ 1 │ default │ │ 0 │ 0 │ 0 │
- │ test_cluster_two_shards_localhost │ 2 │ 1 │ 1 │ localhost │ 127.0.0.1 │ 9000 │ 1 │ default │ │ 0 │ 0 │ 0 │
- │ test_shard_localhost │ 1 │ 1 │ 1 │ localhost │ 127.0.0.1 │ 9000 │ 1 │ default │ │ 0 │ 0 │ 0 │
- │ test_shard_localhost_secure │ 1 │ 1 │ 1 │ localhost │ 127.0.0.1 │ 9440 │ 0 │ default │ │ 0 │ 0 │ 0 │
- │ test_unavailable_shard │ 1 │ 1 │ 1 │ localhost │ 127.0.0.1 │ 9000 │ 1 │ default │ │ 0 │ 0 │ 0 │
- │ test_unavailable_shard │ 2 │ 1 │ 1 │ localhost │ 127.0.0.1 │ 1 │ 0 │ default │ │ 0 │ 0 │ 0 │
- └──────────────────────────────────────────────┴───────────┴──────────────┴─────────────┴────────────────────────────┴──────────────┴──────┴──────────┴─────────┴──────────────────┴──────────────┴─────────────────┴─────────────────────────┘
Enter a pod again, as in section 3.3 (the prompts below are from chi-sample-01-cluster1-0-0-0):
kubectl -n ckk8s exec -it chi-sample-01-cluster1-0-0-0 -- bash
1) Inspect the shards and replicas
- /// open /etc/clickhouse-server/config.d/chop-generated-remote_servers.xml
-
- root@chi-sample-01-cluster1-0-0-0:/etc/clickhouse-server/config.d# cat chop-generated-remote_servers.xml
- <yandex>
-   <remote_servers>
-     <!-- User-specified clusters -->
-     <cluster1>
-       <shard>
-         <internal_replication>true</internal_replication>
-         <replica>
-           <host>chi-sample-01-cluster1-0-0</host>
-           <port>9000</port>
-         </replica>
-         <replica>
-           <host>chi-sample-01-cluster1-0-1</host>
-           <port>9000</port>
-         </replica>
-       </shard>
-       <shard>
-         <internal_replication>true</internal_replication>
-         <replica>
-           <host>chi-sample-01-cluster1-1-0</host>
-           <port>9000</port>
-         </replica>
-         <replica>
-           <host>chi-sample-01-cluster1-1-1</host>
-           <port>9000</port>
-         </replica>
-       </shard>
-     </cluster1>
-     <!-- Autogenerated clusters -->
-     <all-replicated>
-       <shard>
-         <internal_replication>true</internal_replication>
-         <replica>
-           <host>chi-sample-01-cluster1-0-0</host>
-           <port>9000</port>
-         </replica>
-         <replica>
-           <host>chi-sample-01-cluster1-0-1</host>
-           <port>9000</port>
-         </replica>
-         <replica>
-           <host>chi-sample-01-cluster1-1-0</host>
-           <port>9000</port>
-         </replica>
-         <replica>
-           <host>chi-sample-01-cluster1-1-1</host>
-           <port>9000</port>
-         </replica>
-       </shard>
-     </all-replicated>
-     <all-sharded>
-       <shard>
-         <internal_replication>false</internal_replication>
-         <replica>
-           <host>chi-sample-01-cluster1-0-0</host>
-           <port>9000</port>
-         </replica>
-       </shard>
-       <shard>
-         <internal_replication>false</internal_replication>
-         <replica>
-           <host>chi-sample-01-cluster1-0-1</host>
-           <port>9000</port>
-         </replica>
-       </shard>
-       <shard>
-         <internal_replication>false</internal_replication>
-         <replica>
-           <host>chi-sample-01-cluster1-1-0</host>
-           <port>9000</port>
-         </replica>
-       </shard>
-       <shard>
-         <internal_replication>false</internal_replication>
-         <replica>
-           <host>chi-sample-01-cluster1-1-1</host>
-           <port>9000</port>
-         </replica>
-       </shard>
-     </all-sharded>
-   </remote_servers>
- </yandex>
Compared with 3.3, more shard and replica entries have appeared, since the layout is now 2 shards with 2 replicas. Note also that internal_replication for cluster1 is now true: with more than one replica, replication is delegated to ReplicatedMergeTree tables (through ZooKeeper) rather than having the Distributed engine write to every replica.
2) Inspect the macros and ZooKeeper settings
- /// look under /etc/clickhouse-server/conf.d/
-
- root@chi-sample-01-cluster1-0-0-0:/etc/clickhouse-server/conf.d# ls
- chop-generated-macros.xml chop-generated-zookeeper.xml
- root@chi-sample-01-cluster1-0-0-0:/etc/clickhouse-server/conf.d# cat chop-generated-macros.xml
- <yandex>
-   <macros>
-     <installation>sample-01</installation>
-     <all-sharded-shard>0</all-sharded-shard>
-     <cluster>cluster1</cluster>
-     <shard>0</shard>
-     <replica>chi-sample-01-cluster1-0-0</replica>
-   </macros>
- </yandex>
-
-
-
- root@chi-sample-01-cluster1-0-0-0:/etc/clickhouse-server/conf.d# cat chop-generated-zookeeper.xml
- <yandex>
-   <zookeeper>
-     <node>
-       <host>zookeeper.ckk8s</host>
-       <port>2181</port>
-     </node>
-   </zookeeper>
-   <distributed_ddl>
-     <path>/clickhouse/sample-01/task_queue/ddl</path>
-   </distributed_ddl>
- </yandex>
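These macros are what the {installation}, {cluster}, {shard} and {replica} placeholders expand to in the ReplicatedMergeTree paths used in part 6 below; they can also be read back from SQL inside the container:
- # system.macros lists each macro and its per-host substitution
- clickhouse-client --query "SELECT * FROM system.macros"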
If the zookeeper section in sample-2shard-2replica.yaml is instead written as follows,
listing every ZooKeeper node explicitly:
- zookeeper:
-   nodes:
-     - host: zookeeper-0.zookeepers.ckk8s
-       port: 2181
-     - host: zookeeper-1.zookeepers.ckk8s
-       port: 2181
-     - host: zookeeper-2.zookeepers.ckk8s
-       port: 2181
then the ZooKeeper information in /etc/clickhouse-server/conf.d/chop-generated-zookeeper.xml becomes:
- <yandex>
-   <zookeeper>
-     <node>
-       <host>zookeeper-0.zookeepers.ckk8s</host>
-       <port>2181</port>
-     </node>
-     <node>
-       <host>zookeeper-1.zookeepers.ckk8s</host>
-       <port>2181</port>
-     </node>
-     <node>
-       <host>zookeeper-2.zookeepers.ckk8s</host>
-       <port>2181</port>
-     </node>
-   </zookeeper>
-   <distributed_ddl>
-     <path>/clickhouse/sample-01/task_queue/ddl</path>
-   </distributed_ddl>
- </yandex>
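Either form can be verified from inside ClickHouse, because the system.zookeeper table proxies the live connection (it requires a path filter in the WHERE clause); a sketch from the workstation:
- # lists the root znodes if the ZooKeeper connection works
- kubectl -n ckk8s exec -it chi-sample-01-cluster1-0-0-0 -- clickhouse-client \
-   --query "SELECT name FROM system.zookeeper WHERE path = '/'"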
1) Create a distributed table
- CREATE TABLE test AS system.one ENGINE = Distributed('cluster1', 'system', 'one')
-
- CREATE TABLE test AS system.one
- ENGINE = Distributed('cluster1', 'system', 'one')
-
- Query id: eab6dddb-3463-466f-aa8d-5d0cd3cea110
-
- Ok.
-
- 0 rows in set. Elapsed: 0.006 sec.
2) Query it
- select * from test
-
- SELECT *
- FROM test
-
- Query id: 84a829f1-3ec7-424f-b1c5-44ca38a751b2
-
- ┌─dummy─┐
- │ 0 │
- └───────┘
- ┌─dummy─┐
- │ 0 │
- └───────┘
-
- 2 rows in set. Elapsed: 0.027 sec.
-
Each shard contributed one row from system.one (dummy = 0), so two rows come back in total.
3) Check which hosts served the query
Use hostName():
- /// first run
- select hostName() from test;
-
-
-
- SELECT hostName()
- FROM test
-
- Query id: 3d5efc14-7b56-4142-a9c6-b3eee8693599
-
- ┌─hostName()───────────────────┐
- │ chi-sample-01-cluster1-0-1-0 │
- └──────────────────────────────┘
- ┌─hostName()───────────────────┐
- │ chi-sample-01-cluster1-1-0-0 │
- └──────────────────────────────┘
-
- 2 rows in set. Elapsed: 0.012 sec.
-
- /// second run
- select hostName() from test;
-
- SELECT hostName()
- FROM test
-
- Query id: 4e51ca0e-d4d8-47d5-9976-8271284a079a
-
- ┌─hostName()───────────────────┐
- │ chi-sample-01-cluster1-0-1-0 │
- └──────────────────────────────┘
- ┌─hostName()───────────────────┐
- │ chi-sample-01-cluster1-1-0-0 │
- └──────────────────────────────┘
-
- 2 rows in set. Elapsed: 0.007 sec.
-
- /// third run
-
- select hostName() from test;
-
- SELECT hostName()
- FROM test
-
- Query id: b1cf71bd-a38f-4657-8323-8f868a21548b
-
- ┌─hostName()───────────────────┐
- │ chi-sample-01-cluster1-0-1-0 │
- └──────────────────────────────┘
- ┌─hostName()───────────────────┐
- │ chi-sample-01-cluster1-1-1-0 │
- └──────────────────────────────┘
-
- 2 rows in set. Elapsed: 0.007 sec.
Across three runs, each SELECT is served by one replica per shard; the third run hit chi-sample-01-cluster1-1-1-0 instead of chi-sample-01-cluster1-1-0-0, showing that reads are balanced across a shard's replicas.
The catch so far: nothing uses persistent storage yet. If the cluster is stopped, all data is lost.
This part shows how to add persistent storage to the ClickHouse cluster on k8s.
1) Create the file sample-storage.yaml with the following content:
- apiVersion: "clickhouse.altinity.com/v1"
- kind: "ClickHouseInstallation"
- metadata:
-   name: "sample-01"
- spec:
-   defaults:
-     templates:
-       podTemplate: clickhouse-stable
-       dataVolumeClaimTemplate: storage-vc-template
-   templates:
-     podTemplates:
-       - name: clickhouse-stable
-         spec:
-           containers:
-             - name: clickhouse
-               image: yandex/clickhouse-server:latest
-     volumeClaimTemplates:
-       - name: storage-vc-template
-         spec:
-           storageClassName: standard
-           accessModes:
-             - ReadWriteOnce
-           resources:
-             requests:
-               storage: 1Gi
-   configuration:
-     zookeeper:
-       nodes:
-         - host: zookeeper.ckk8s
-           port: 2181
-     clusters:
-       - name: "cluster1"
-         layout:
-           shardsCount: 2
-           replicasCount: 2
Here the clickhouse-stable pod template pins the server image for every pod, and storage-vc-template gives each replica its own 1Gi PersistentVolumeClaim with storageClassName standard. (The manifest above uses the operator's templates syntax, i.e. defaults.templates with dataVolumeClaimTemplate; very old operator releases used a defaults.deployment section instead.)
2) Deploy
- kubectl -n ckk8s apply -f sample-storage.yaml
-
- clickhouseinstallation.clickhouse.altinity.com/sample-01 configured
3) Check the pods
- kubectl get pod -n ckk8s
- NAME READY STATUS RESTARTS AGE
- chi-sample-01-cluster1-0-0-0 1/1 Running 0 58m
- chi-sample-01-cluster1-0-1-0 1/1 Running 0 56m
- chi-sample-01-cluster1-1-0-0 1/1 Running 0 56m
- chi-sample-01-cluster1-1-1-0 1/1 Running 0 56m
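Unlike the earlier runs, each replica is now backed by its own PersistentVolumeClaim created from storage-vc-template, which can be confirmed with:
- # one 1Gi claim per replica pod
- kubectl -n ckk8s get pvc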
4) Enter a pod and test persistent storage
kubectl -n ckk8s exec -it chi-sample-01-cluster1-0-0-0 -- /bin/bash
Create a replicated table (the {cluster}, {shard} and {replica} placeholders resolve through the macros shown earlier):
- CREATE TABLE events_local on cluster '{cluster}' (
- event_date Date,
- event_type Int32,
- article_id Int32,
- title String
- ) engine=ReplicatedMergeTree('/clickhouse/{installation}/{cluster}/tables/{shard}/{database}/{table}', '{replica}')
- PARTITION BY toYYYYMM(event_date)
- ORDER BY (event_type, article_id);
-
-
-
-
- CREATE TABLE events_local ON CLUSTER `{cluster}`
- (
- `event_date` Date,
- `event_type` Int32,
- `article_id` Int32,
- `title` String
- )
- ENGINE = ReplicatedMergeTree('/clickhouse/{installation}/{cluster}/tables/{shard}/{database}/{table}', '{replica}')
- PARTITION BY toYYYYMM(event_date)
- ORDER BY (event_type, article_id)
-
- Query id: 7bbbf3e1-a49a-4fa0-a045-4033a1e6a918
-
- ┌─host───────────────────────┬─port─┬─status─┬─error─┬─num_hosts_remaining─┬─num_hosts_active─┐
- │ chi-sample-01-cluster1-0-0 │ 9000 │ 0 │ │ 3 │ 2 │
- │ chi-sample-01-cluster1-1-1 │ 9000 │ 0 │ │ 2 │ 2 │
- └────────────────────────────┴──────┴────────┴───────┴─────────────────────┴──────────────────┘
- ┌─host───────────────────────┬─port─┬─status─┬─error─┬─num_hosts_remaining─┬─num_hosts_active─┐
- │ chi-sample-01-cluster1-1-0 │ 9000 │ 0 │ │ 1 │ 0 │
- │ chi-sample-01-cluster1-0-1 │ 9000 │ 0 │ │ 0 │ 0 │
- └────────────────────────────┴──────┴────────┴───────┴─────────────────────┴──────────────────┘
-
- 4 rows in set. Elapsed: 0.381 sec.
-
-
-
Create a Distributed table over it:
- CREATE TABLE events on cluster '{cluster}' AS events_local
- ENGINE = Distributed('{cluster}', default, events_local, rand());
-
-
- CREATE TABLE events ON CLUSTER `{cluster}` AS events_local
- ENGINE = Distributed('{cluster}', default, events_local, rand())
-
- Query id: 81808a54-02ae-4b6f-8edb-9049d0283ac7
-
- ┌─host───────────────────────┬─port─┬─status─┬─error─┬─num_hosts_remaining─┬─num_hosts_active─┐
- │ chi-sample-01-cluster1-1-0 │ 9000 │ 0 │ │ 3 │ 0 │
- │ chi-sample-01-cluster1-0-0 │ 9000 │ 0 │ │ 2 │ 0 │
- │ chi-sample-01-cluster1-1-1 │ 9000 │ 0 │ │ 1 │ 0 │
- │ chi-sample-01-cluster1-0-1 │ 9000 │ 0 │ │ 0 │ 0 │
- └────────────────────────────┴──────┴────────┴───────┴─────────────────────┴──────────────────┘
-
- 4 rows in set. Elapsed: 0.143 sec.
5) Insert data
INSERT INTO events SELECT today(), rand()%3, number, 'my title' FROM numbers(100);
6) Enter the other pods to verify (a verification sketch follows these commands)
- kubectl -n ckk8s exec -it chi-sample-01-cluster1-0-1-0 -- /bin/bash
-
- kubectl -n ckk8s exec -it chi-sample-01-cluster1-1-0-0 -- /bin/bash
-
- kubectl -n ckk8s exec -it chi-sample-01-cluster1-1-1-0 -- /bin/bash
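A sketch of the check to run inside each pod: the Distributed table should count all 100 rows from anywhere, while events_local holds only that shard's portion (roughly half, since rand() sharding splits the insert between the two shards):
- # total via the Distributed table vs. this shard's local part
- clickhouse-client --query "SELECT count() FROM events"
- clickhouse-client --query "SELECT count() FROM events_local"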
Now change the layout to 2 shards with 1 replica: set replicasCount: 1 in sample-storage.yaml, then redeploy:
kubectl apply -f sample-storage.yaml -n ckk8s
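The operator reconciles the change by deleting the surplus replica pods one at a time; instead of re-running kubectl get pod as in the snapshots below, the rollout can also be watched live:
- # -w streams pod state changes until interrupted
- kubectl -n ckk8s get pod -w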
Check the pods:
- kubectl get pod -n ckk8s
-
- NAME READY STATUS RESTARTS AGE
- chi-sample-01-cluster1-0-0-0 1/1 Running 0 150m
- chi-sample-01-cluster1-0-1-0 1/1 Terminating 0 150m
- chi-sample-01-cluster1-1-0-0 1/1 Running 0 148m
- chi-sample-01-cluster1-1-1-0 1/1 Terminating 0 147m
-
- NAME READY STATUS RESTARTS AGE
- chi-sample-01-cluster1-0-0-0 1/1 Running 0 150m
- chi-sample-01-cluster1-1-0-0 1/1 Running 0 148m
- chi-sample-01-cluster1-1-1-0 0/1 Terminating 0 147m
-
- NAME READY STATUS RESTARTS AGE
- chi-sample-01-cluster1-0-0-0 1/1 Running 0 151m
- chi-sample-01-cluster1-1-0-0 1/1 Running 0 149m
The pod count is shrinking as the extra replicas are removed.
Enter the two remaining pods: the tables and the data are still there.
- chi-sample-01-cluster1-1-0-0.chi-sample-01-cluster1-1-0.ckk8s.svc.cluster.local :) show tables;
-
- SHOW TABLES
-
- Query id: d999a560-cb35-4de3-85ea-dd07182ad1a0
-
- ┌─name─────────┐
- │ events │
- │ events_local │
- └──────────────┘
-
- 2 rows in set. Elapsed: 0.003 sec.
-
- chi-sample-01-cluster1-1-0-0.chi-sample-01-cluster1-1-0.ckk8s.svc.cluster.local :) select count() from events;
-
- SELECT count()
- FROM events
-
- Query id: 5a33d098-239a-4ff9-96ee-1feb005b2e6f
-
- ┌─count()─┐
- │ 100 │
- └─────────┘
-
- 1 rows in set. Elapsed: 0.009 sec.
Next change it to 3 shards with 2 replicas: set shardsCount: 3 and replicasCount: 2 in sample-storage.yaml, then redeploy:
kubectl apply -f sample-storage.yaml -n ckk8s
Check the pods:
- kubectl get pod -n ckk8s
-
- NAME READY STATUS RESTARTS AGE
- chi-sample-01-cluster1-0-0-0 1/1 Running 0 154m
- chi-sample-01-cluster1-0-1-0 0/1 ContainerCreating 0 1s
- chi-sample-01-cluster1-1-0-0 1/1 Running 0 152m
-
- NAME READY STATUS RESTARTS AGE
- chi-sample-01-cluster1-0-0-0 1/1 Running 0 155m
- chi-sample-01-cluster1-0-1-0 1/1 Running 0 68s
- chi-sample-01-cluster1-1-0-0 1/1 Running 0 153m
- chi-sample-01-cluster1-1-1-0 1/1 Running 0 30s
- chi-sample-01-cluster1-2-0-0 0/1 ContainerCreating 0 1s
-
- NAME READY STATUS RESTARTS AGE
- chi-sample-01-cluster1-0-0-0 1/1 Running 0 158m
- chi-sample-01-cluster1-0-1-0 1/1 Running 0 3m56s
- chi-sample-01-cluster1-1-0-0 1/1 Running 0 156m
- chi-sample-01-cluster1-1-1-0 1/1 Running 0 3m18s
- chi-sample-01-cluster1-2-0-0 1/1 Running 0 2m49s
- chi-sample-01-cluster1-2-1-0 1/1 Running 0 2m27s
Enter one of the new pods: the tables and the data have been replicated to it.
- root@chi-sample-01-cluster1-2-1-0:/# clickhouse-client
- ClickHouse client version 21.8.5.7 (official build).
- Connecting to localhost:9000 as user default.
- Connected to ClickHouse server version 21.8.5 revision 54449.
-
- chi-sample-01-cluster1-2-1-0.chi-sample-01-cluster1-2-1.ckk8s.svc.cluster.local :) show tables;
-
- SHOW TABLES
-
- Query id: 9b7e443e-449b-4d0a-ab13-7e410e4ecc01
-
- ┌─name─────────┐
- │ events │
- │ events_local │
- └──────────────┘