Note: the same secondary-development approach works for the other connectors (JDBC, Kafka, and so on); a Kafka example of the artifact to mirror follows the next paragraph.
Recommendation: companies that build an internal platform on top of Flink often need to modify connector source code to cover special in-house scenarios and requirements, but patching the source directly makes every version upgrade expensive and slow. The approach described here instead extends the relevant source classes, adds the enhancements there, and packages the result so that the enhanced classes override the ones shipped in the official jar.
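For another connector, the only thing that changes is which official artifact the version numbers are lined up against. For Kafka, for example (illustrative, reusing the ${flink.version} and ${scala.binary.version} properties defined in the pom below):

<!-- illustrative: mirror the official Kafka SQL connector the same way -->
<dependency>
    <groupId>org.apache.flink</groupId>
    <artifactId>flink-sql-connector-kafka_${scala.binary.version}</artifactId>
    <version>${flink.version}</version>
</dependency>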
Taking elasticsearch6 as the example, part of the pom mirrors the officially provided connector package, with the version numbers kept in line:
<dependency>
    <groupId>org.apache.flink</groupId>
    <artifactId>flink-sql-connector-elasticsearch6_${scala.binary.version}</artifactId>
    <version>${flink.version}</version>
</dependency>
<properties>
    <maven.compiler.source>${java.version}</maven.compiler.source>
    <maven.compiler.target>${java.version}</maven.compiler.target>
    <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
    <project.reporting.outputEncoding>UTF-8</project.reporting.outputEncoding>
    <java.version>1.8</java.version>
    <flink.version>1.13.1</flink.version>
    <scala.binary.version>2.11</scala.binary.version>
    <slf4j.version>1.7.15</slf4j.version>
    <sql.driver.version>8.0.21</sql.driver.version>
    <fastjson.version>1.2.75</fastjson.version>
    <google.guava.version>20.0</google.guava.version>
    <apache.commons.version>3.11</apache.commons.version>
    <cn.hutool.all.version>5.5.2</cn.hutool.all.version>
    <macasaet.version>1.5.0</macasaet.version>
    <!-- compile,provided -->
    <scope>provided</scope>
</properties>

<dependencies>
    <!-- flink -->
    <dependency>
        <groupId>org.apache.flink</groupId>
        <artifactId>flink-connector-elasticsearch6_${scala.binary.version}</artifactId>
        <version>${flink.version}</version>
    </dependency>
    <!-- kafka -->
    <dependency>
        <groupId>org.apache.flink</groupId>
        <artifactId>flink-connector-kafka_${scala.binary.version}</artifactId>
        <version>${flink.version}</version>
        <scope>${scope}</scope>
        <exclusions>
            <exclusion>
                <artifactId>slf4j-api</artifactId>
                <groupId>org.slf4j</groupId>
            </exclusion>
        </exclusions>
    </dependency>
    <dependency>
        <groupId>org.apache.flink</groupId>
        <artifactId>flink-csv</artifactId>
        <version>${flink.version}</version>
        <scope>${scope}</scope>
    </dependency>
    <dependency
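The dependencies above cover the "extend and enhance" side; the "package and override" side needs a fat-jar build. A minimal sketch of the build section I would pair with it, assuming maven-shade-plugin (the plugin version and the excluded signature files are the usual boilerplate, not something this project mandates):

<build>
    <plugins>
        <plugin>
            <groupId>org.apache.maven.plugins</groupId>
            <artifactId>maven-shade-plugin</artifactId>
            <!-- plugin version is illustrative -->
            <version>3.2.4</version>
            <executions>
                <execution>
                    <phase>package</phase>
                    <goals>
                        <goal>shade</goal>
                    </goals>
                    <configuration>
                        <transformers>
                            <!-- merge META-INF/services so SPI-discovered table factories stay visible -->
                            <transformer implementation="org.apache.maven.plugins.shade.resource.ServicesResourceTransformer"/>
                        </transformers>
                        <filters>
                            <filter>
                                <!-- strip signature files pulled in from signed dependencies -->
                                <artifact>*:*</artifact>
                                <excludes>
                                    <exclude>META-INF/*.SF</exclude>
                                    <exclude>META-INF/*.DSA</exclude>
                                    <exclude>META-INF/*.RSA</exclude>
                                </excludes>
                            </filter>
                        </filters>
                    </configuration>
                </execution>
            </executions>
        </plugin>
    </plugins>
</build>

In my experience, when the module's own compiled classes overlap with identical classes inside the shaded connector jars, the shade plugin keeps the module's copies and only logs an overlapping-classes warning, which is what lets the enhanced classes replace the originals; combined with Flink's child-first classloading for user code (the default resolve order), the job then picks up the enhanced versions at runtime.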