
Flink on K8s (minikube): Session-Mode Deployment with HA

Install kubectl

curl -LO https://storage.googleapis.com/kubernetes-release/release/`curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt`/bin/linux/amd64/kubectl

chmod +x ./kubectl

sudo mv ./kubectl /usr/local/bin/kubectl
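To verify the installation, check the client version:

kubectl version --client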


1. Install minikube

curl -Lo minikube https://storage.googleapis.com/minikube/releases/latest/minikube-linux-amd64 \
  && chmod +x minikube
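The command above only downloads the binary and makes it executable; to invoke it as minikube you will typically also move it onto your PATH:

sudo mv ./minikube /usr/local/bin/minikube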

2. Start minikube

minikube start --image-repository=registry.cn-hangzhou.aliyuncs.com/google_containers --kubernetes-version v1.15.0
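Once the cluster is up, confirm its status and that the node is Ready:

minikube status
kubectl get nodes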

Useful kubectl commands

kubectl get pods --all-namespaces
kubectl get pods -A

kubectl describe pod ${podName}
# Exec into a pod's container
kubectl exec -ti <your-pod-name> -n <your-namespace> -- /bin/sh

# List pods in a given namespace
kubectl get pod -n flink
# List the created services
kubectl get service -n flink
# Edit the configuration of a created service
kubectl edit svc -n ding-flink-test flink-jobmanager
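Another handy command when debugging Flink pods is tailing a pod's log (the pod name is a placeholder):

kubectl logs -f <your-pod-name> -n flink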

3. Deploy the Flink cluster

Prepare the minikube network: enable promiscuous mode on docker0 so that Flink components can reach themselves through their Kubernetes service:

minikube ssh 'sudo ip link set docker0 promisc on'

Create the namespace

kubectl create -f namespace.yaml
namespace/flink created

where namespace.yaml contains:

kind: Namespace
apiVersion: v1
metadata:
  name: flink
  labels:
    name: flink

List the namespaces in the minikube cluster:

# kubectl get namespaces
NAME          STATUS    AGE
flink         Active    1m
kube-public   Active    254d
kube-system   Active    254d

Create the Flink ConfigMap, the JobManager service, and the JobManager/TaskManager deployments (the full YAML files are in the appendix):

kubectl create -f flink-configuration-configmap.yaml
kubectl create -f jobmanager-service.yaml
kubectl create -f jobmanager-deployment.yaml
kubectl create -f taskmanager-deployment.yaml
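The YAML files in the appendix do not set a namespace themselves, so add -n flink to the create commands above if the resources should live in the flink namespace. Afterwards, watch the pods until the JobManager and both TaskManagers reach Running:

kubectl get pods -w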


4. Forward the JobManager port to the local machine

kubectl port-forward service/flink-jobmanager 8081:8081

Check the created services:

kubectl get svc

You can access the Flink UI and submit jobs in several ways. With the port-forward from step 4 in place, open http://localhost:8081 in a browser, or submit a job from a local Flink distribution:

./bin/flink run -m localhost:8081 ./examples/streaming/WordCount.jar
  • Alternatively, create a NodePort service on the JobManager's REST endpoint:
    1. Run kubectl create -f jobmanager-rest-service.yaml to create a NodePort service for the JobManager; the example jobmanager-rest-service.yaml can be found in the appendix.
    2. Run kubectl get svc flink-jobmanager-rest to find the node port of the service, then navigate to http://<public-node-ip>:<node-port> in your browser.
    3. Similarly to the port-forward solution, you can also use the following command to submit jobs to the cluster (a sketch of how to find the address on minikube follows below):
./bin/flink run -m <public-node-ip>:<node-port> ./examples/streaming/WordCount.jar
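On minikube, a quick sketch of how to discover both parts of that address:

# the "public node IP" is the minikube VM's address
minikube ip
# the node port is assigned by Kubernetes (default range 30000-32767)
kubectl get svc flink-jobmanager-rest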

5. Teardown commands:

kubectl delete -f jobmanager-deployment.yaml
kubectl delete -f taskmanager-deployment.yaml
kubectl delete -f jobmanager-service.yaml
kubectl delete -f flink-configuration-configmap.yaml
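If you also want to remove the namespace created in step 3:

kubectl delete -f namespace.yaml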

Appendix: YAML files for creating and starting Flink

flink-configuration-configmap.yaml

apiVersion: v1
kind: ConfigMap
metadata:
  name: flink-config
  labels:
    app: flink
data:
  flink-conf.yaml: |+
    jobmanager.rpc.address: flink-jobmanager
    taskmanager.numberOfTaskSlots: 1
    blob.server.port: 6124
    jobmanager.rpc.port: 6123
    taskmanager.rpc.port: 6122
    jobmanager.heap.size: 1024m
    taskmanager.heap.size: 1024m
  log4j.properties: |+
    log4j.rootLogger=INFO, file
    log4j.logger.akka=INFO
    log4j.logger.org.apache.kafka=INFO
    log4j.logger.org.apache.hadoop=INFO
    log4j.logger.org.apache.zookeeper=INFO
    log4j.appender.file=org.apache.log4j.FileAppender
    log4j.appender.file.file=${log.file}
    log4j.appender.file.layout=org.apache.log4j.PatternLayout
    log4j.appender.file.layout.ConversionPattern=%d{yyyy-MM-dd HH:mm:ss,SSS} %-5p %-60c %x - %m%n
    log4j.logger.org.apache.flink.shaded.akka.org.jboss.netty.channel.DefaultChannelPipeline=ERROR, file
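Note that the flink-conf.yaml above contains no HA settings yet. A minimal sketch of the ZooKeeper-based HA entries for a Flink 1.8 session cluster, assuming ZooKeeper and HDFS run on the CDH hosts mapped via hostAliases below (the quorum addresses and storage path are placeholders for your environment):

high-availability: zookeeper
high-availability.zookeeper.quorum: cdh-master:2181,cdh-slave1:2181,cdh-slave2:2181
high-availability.storageDir: hdfs:///flink/ha/
high-availability.cluster-id: /flink-session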

jobmanager-deployment.yaml

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: flink-jobmanager
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: flink
        component: jobmanager
    spec:
      containers:
      - name: jobmanager
        image: flink:1.8.2
        workingDir: /opt/flink
        command: ["/bin/bash", "-c", "$FLINK_HOME/bin/jobmanager.sh start;\
          while :;
          do
            if [[ -f $(find log -name '*jobmanager*.log' -print -quit) ]];
              then tail -f -n +1 log/*jobmanager*.log;
            fi;
          done"]
        ports:
        - containerPort: 6123
          name: rpc
        - containerPort: 6124
          name: blob
        - containerPort: 8081
          name: ui
        livenessProbe:
          tcpSocket:
            port: 6123
          initialDelaySeconds: 30
          periodSeconds: 60
        volumeMounts:
        - name: flink-config-volume
          mountPath: /opt/flink/conf
      volumes:
      - name: flink-config-volume
        configMap:
          name: flink-config
          items:
          - key: flink-conf.yaml
            path: flink-conf.yaml
          - key: log4j.properties
            path: log4j.properties
      hostAliases:
      - ip: "192.168.66.192"
        hostnames:
        - "cdh-master"
      - ip: "192.168.66.193"
        hostnames:
        - "cdh-slave1"
      - ip: "192.168.66.194"
        hostnames:
        - "cdh-slave2"
      - ip: "192.168.66.195"
        hostnames:
        - "cdh-slave3"

taskmanager-deployment.yaml

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: flink-taskmanager
spec:
  replicas: 2
  template:
    metadata:
      labels:
        app: flink
        component: taskmanager
    spec:
      containers:
      - name: taskmanager
        image: flink:1.8.2
        workingDir: /opt/flink
        command: ["/bin/bash", "-c", "$FLINK_HOME/bin/taskmanager.sh start; \
          while :;
          do
            if [[ -f $(find log -name '*taskmanager*.log' -print -quit) ]];
              then tail -f -n +1 log/*taskmanager*.log;
            fi;
          done"]
        ports:
        - containerPort: 6122
          name: rpc
        livenessProbe:
          tcpSocket:
            port: 6122
          initialDelaySeconds: 30
          periodSeconds: 60
        volumeMounts:
        - name: flink-config-volume
          mountPath: /opt/flink/conf/
      volumes:
      - name: flink-config-volume
        configMap:
          name: flink-config
          items:
          - key: flink-conf.yaml
            path: flink-conf.yaml
          - key: log4j.properties
            path: log4j.properties
      hostAliases:
      - ip: "192.168.66.192"
        hostnames:
        - "cdh-master"
      - ip: "192.168.66.193"
        hostnames:
        - "cdh-slave1"
      - ip: "192.168.66.194"
        hostnames:
        - "cdh-slave2"
      - ip: "192.168.66.195"
        hostnames:
        - "cdh-slave3"

jobmanager-service.yaml

apiVersion: v1
kind: Service
metadata:
  name: flink-jobmanager
spec:
  type: ClusterIP
  ports:
  - name: rpc
    port: 6123
  - name: blob
    port: 6124
  - name: ui
    port: 8081
    protocol: TCP
    targetPort: 8081
  selector:
    app: flink
    component: jobmanager

jobmanager-rest-service.yaml

(Optional service that exposes the JobManager REST port on a public Kubernetes node port.)

apiVersion: v1
kind: Service
metadata:
  name: flink-jobmanager-rest
spec:
  type: NodePort
  ports:
  - name: rest
    port: 8081
    targetPort: 8081
  selector:
    app: flink
    component: jobmanager

host-edit.yaml

(A standalone test pod that applies the same hostAliases entries and prints /etc/hosts, so you can verify the host mappings.)

apiVersion: v1
kind: Pod
metadata:
  name: hostaliases-pod # pod name
spec:
  hostAliases:
  - ip: "192.168.66.192"
    hostnames:
    - "cdh-master"
  - ip: "192.168.66.193"
    hostnames:
    - "cdh-slave1"
  - ip: "192.168.66.194"
    hostnames:
    - "cdh-slave2"
  - ip: "192.168.66.195"
    hostnames:
    - "cdh-slave3"
  containers:
  - name: cat-hosts
    image: flink:1.8.2
    command:
    - cat
    args:
    - "/etc/hosts"