Starting with Elasticsearch 7 the TransportClient is deprecated, and it will be removed in Elasticsearch 8. Spring Data Elasticsearch still supports the TransportClient as long as the Elasticsearch version in use still ships it, but since version 4.0 Spring Data Elasticsearch has deprecated the classes based on it. The High Level REST Client is now the default Elasticsearch client; it is a drop-in replacement for the TransportClient and accepts and returns the very same request/response objects.
ElasticsearchRestTemplate is a class in the Spring Data Elasticsearch project, similar to the template classes in other Spring projects. In recent versions of Spring Data Elasticsearch, ElasticsearchRestTemplate replaces the old ElasticsearchTemplate, because ElasticsearchTemplate is built on the TransportClient, which will be removed in the 8.x line. ElasticsearchRestTemplate is built on the RestHighLevelClient; if you do not configure it manually, ElasticsearchRestTemplate uses the auto-configured RestHighLevelClient bean, in which case the ES server should be listening on the default port 9200.
First of all, the Elasticsearch version number really matters; with a mismatched version all kinds of things fail. I installed ES and Kibana first and created the project afterwards; the application failed to start, the log asked for a newer ES version, and I ended up reinstalling.
My spring-boot-starter-parent is 2.4.5, which corresponds to ES 7.9.3; the version is printed in the log at startup.
So how do you choose the version?
For example, if the Spring Boot version is
<version>2.3.0.RELEASE</version>
then search for elasticsearch under the project's External Libraries and you will find the elasticsearch-7.6.2.jar dependency;
open its MANIFEST.MF file; the X-Compile-Elasticsearch-Version attribute in the jar tells us that the compatible Elasticsearch version is 7.6.2;
one more thing worth noting: if you use the Chinese analyzer (IK Analysis), pick the matching version 7.6.2 as well, and the same goes for Kibana and Logstash.
Reference project source code:
https://github.com/macrozheng/mall-learning/tree/master/mall-tiny-elasticsearch
<properties>
<java.version>1.8</java.version>
<elasticsearch.version>7.6.1</elasticsearch.version>
</properties>
<dependencies>
<!-- jsoup for parsing web pages -->
<dependency>
<groupId>org.jsoup</groupId>
<artifactId>jsoup</artifactId>
<version>1.13.1</version>
</dependency>
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-data-elasticsearch</artifactId>
</dependency>
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-thymeleaf</artifactId>
</dependency>
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-web</artifactId>
</dependency>
<dependency>
<groupId>com.alibaba</groupId>
<artifactId>fastjson</artifactId>
<version>1.2.66</version>
</dependency>
<dependency>
<groupId>org.projectlombok</groupId>
<artifactId>lombok</artifactId>
<optional>true</optional>
</dependency>
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-test</artifactId>
<scope>test</scope>
</dependency>
</dependencies>
spring:
elasticsearch:
rest:
uris: 127.0.0.1:9200
The following configuration works as well:
spring:
data:
elasticsearch:
client:
reactive:
endpoints: 127.0.0.1:9200
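If you prefer configuring the client in Java instead of application.yml, a minimal sketch on Spring Data Elasticsearch 4.x looks like the following (the class name EsClientConfig is just illustrative; the host and port mirror the yml above):
import org.elasticsearch.client.RestHighLevelClient;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.data.elasticsearch.client.ClientConfiguration;
import org.springframework.data.elasticsearch.client.RestClients;
import org.springframework.data.elasticsearch.config.AbstractElasticsearchConfiguration;
@Configuration
public class EsClientConfig extends AbstractElasticsearchConfiguration {
@Override
@Bean
public RestHighLevelClient elasticsearchClient() {
// Connect the RestHighLevelClient to the same address as the yml configuration
ClientConfiguration clientConfiguration = ClientConfiguration.builder()
.connectedTo("127.0.0.1:9200")
.build();
return RestClients.create(clientConfiguration).rest();
}
}
The AbstractElasticsearchConfiguration base class then exposes the ElasticsearchOperations/ElasticsearchRestTemplate bean built on this client.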
Spring Boot gives us several ways to define the mapping, so pick whichever you prefer: configure it with the @Mapping annotation, or define it the native ES way. The latter is a little more work but more explicit, and it is not limited to Java; you can also create the mapping directly with curl or from the ES console.
film-mapping.json
@Data
@AllArgsConstructor
@NoArgsConstructor
@Document(indexName = "my_book")
public class Book {
@Id
private Long id;
@Field(type = FieldType.Text, analyzer = "ik_smart")
private String title;
@Field(type = FieldType.Keyword)
private String author;
@Field(type = FieldType.Text, analyzer = "ik_smart")
private String desc;
@Field(type = FieldType.Integer)
private Integer pageNum;
@Field(type = FieldType.Date)
private Date createDate;
}
@Document(indexName = "film-entity", type = "film")
@Setting(settingPath = "/json/film-setting.json")
@Mapping(mappingPath = "/json/film-mapping.json")
public class FilmEntity {
@Id
private Long id;
// @Field(type = FieldType.Text, searchAnalyzer = "ik_max_word", analyzer = "ik_smart")
private String name;
private String nameOri;
private String publishDate;
private String type;
private String language;
private String fileDuration;
private String director;
// @Field(type = FieldType.Date)
private Date created ;
public String getName() {
return name;
}
public void setName(String name) {
this.name = name;
}
public String getNameOri() {
return nameOri;
}
public void setNameOri(String nameOri) {
this.nameOri = nameOri;
}
public String getPublishDate() {
return publishDate;
}
public void setPublishDate(String publishDate) {
this.publishDate = publishDate;
}
public String getType() {
return type;
}
public void setType(String type) {
this.type = type;
}
public String getLanguage() {
return language;
}
public void setLanguage(String language) {
this.language = language;
}
public String getFileDuration() {
return fileDuration;
}
public void setFileDuration(String fileDuration) {
this.fileDuration = fileDuration;
}
public String getDirector() {
return director;
}
public void setDirector(String director) {
this.director = director;
}
public Date getCreated() {
return created;
}
public void setCreated(Date created) {
this.created = created;
}
public Long getId() {
return id;
}
public void setId(Long id) {
this.id = id;
}
@Override
public String toString() {
return "FilmEntity [id=" + id + ", name=" + name + ", director=" + director + "]";
}
}
{
"film": {
"_all": {
"enabled": true
},
"properties": {
"id": {
"type": "integer"
},
"name": {
"type": "text",
"analyzer": "ikSearchAnalyzer",
"search_analyzer": "ikSearchAnalyzer",
"fields": {
"pinyin": {
"type": "text",
"analyzer": "pinyinSimpleIndexAnalyzer",
"search_analyzer": "pinyinSimpleIndexAnalyzer"
}
}
},
"nameOri": {
"type": "text"
},
"publishDate": {
"type": "text"
},
"type": {
"type": "text"
},
"language": {
"type": "text"
},
"fileDuration": {
"type": "text"
},
"director": {
"type": "text",
"index": "true",
"analyzer": "ikSearchAnalyzer"
},
"created": {
"type": "date",
"format": "yyyy-MM-dd HH:mm:ss||yyyy-MM-dd||epoch_millis"
}
}
}
}
Besides @Mapping, Spring Boot also provides another powerful annotation, @Setting, which lets us set index-level properties, equivalent to the settings section of an Elasticsearch index, for example:
film-setting.json
(IK analysis + pinyin analysis configuration)
{
"index": {
"analysis": {
"filter": {
"edge_ngram_filter": {
"type": "edge_ngram",
"min_gram": 1,
"max_gram": 50
},
"pinyin_simple_filter": {
"type": "pinyin",
"first_letter": "prefix",
"padding_char": " ",
"limit_first_letter_length": 50,
"lowercase": true
}
},
"char_filter": {
"tsconvert": {
"type": "stconvert",
"convert_type": "t2s"
}
},
"analyzer": {
"ikSearchAnalyzer": {
"type": "custom",
"tokenizer": "ik_max_word",
"char_filter": [
"tsconvert"
]
},
"pinyinSimpleIndexAnalyzer": {
"tokenizer": "keyword",
"filter": [
"pinyin_simple_filter",
"edge_ngram_filter",
"lowercase"
]
}
}
}
}
}
The JSON above creates two analyzers named ikSearchAnalyzer and pinyinSimpleIndexAnalyzer. The former uses the IK Chinese tokenizer plus the traditional-to-simplified char_filter, so any field that references this analyzer gets Chinese tokenization and traditional-to-simplified conversion automatically.
pinyinSimpleIndexAnalyzer uses the pinyin filter on a keyword tokenizer and then applies the edge_ngram and lowercase filters.
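To check that the custom analyzers are really attached to the index, one option (a sketch, assuming the auto-configured RestHighLevelClient bean and the film-entity index defined above) is to call the _analyze API:
@Autowired
private RestHighLevelClient restHighLevelClient;
@Test
public void analyzeCheck() throws IOException {
// Run the index's custom pinyin analyzer against a sample string via the _analyze API
AnalyzeRequest request = AnalyzeRequest.withIndexAnalyzer(
"film-entity",               // index that carries the settings above
"pinyinSimpleIndexAnalyzer", // analyzer defined in film-setting.json
"浦东开发开放");
AnalyzeResponse response = restHighLevelClient.indices().analyze(request, RequestOptions.DEFAULT);
response.getTokens().forEach(token -> System.out.println(token.getTerm()));
}
(AnalyzeRequest and AnalyzeResponse come from org.elasticsearch.client.indices.)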
| Annotation | Description |
| --- | --- |
| @Document | Used on a class to mark the entity as a document. indexName: the index name; type: the type within the index (to be removed in 8.x); shards: number of shards, default 5; replicas: number of replicas, default 1 |
| @Id | Used on a field to mark it as the document id |
| @Field | Used on a field to mark it as a document field and specify its mapping attributes: type: the field type, a FieldType enum value (text, long, short, date, integer, object, ...); index: whether to index the field, boolean, default true; store: whether to store the field separately, boolean, default false; analyzer: the analyzer name |
@SpringBootTest
public class SpringDataEsTest {
@Autowired
private ElasticsearchRestTemplate elasticsearchRestTemplate;
@Test
public void create() {
// Create the index from the @Document annotation
elasticsearchRestTemplate.createIndex(Book.class);
// Create the mapping from the @Field annotations
elasticsearchRestTemplate.putMapping(Book.class);
}
@Test
public void delete() {
// Delete by entity class
elasticsearchRestTemplate.deleteIndex(Book.class);
// Delete by index name
elasticsearchRestTemplate.deleteIndex("my_book");
}
}
These operations are no longer recommended; the IDE shows them struck through as deprecated. They are really a carry-over from ElasticsearchTemplate. With ElasticsearchRestTemplate we do not need to create the index ourselves; when the template is first used, the index is created automatically from the entity class.
The @Document annotation comes from the Spring Data Elasticsearch package; indexName is the index name, and createIndex defaults to true, meaning the index is created automatically if it does not exist.
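If you do want to manage the index explicitly on Spring Data Elasticsearch 4.x, the non-deprecated route is the IndexOperations API; a minimal sketch (org.springframework.data.elasticsearch.core.IndexOperations):
@Test
public void createWithIndexOps() {
// IndexOperations replaces the deprecated createIndex/putMapping/deleteIndex methods
IndexOperations indexOps = elasticsearchRestTemplate.indexOps(Book.class);
if (!indexOps.exists()) {
// Create the index (honouring @Setting, if present)
indexOps.create();
// Derive the mapping from the @Field annotations and apply it
indexOps.putMapping(indexOps.createMapping());
}
// indexOps.delete(); // drop the index again
}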
@SpringBootTest
public class SpringDataEsTest {
@Autowired
private ElasticsearchRestTemplate elasticsearchRestTemplate;
@Test
public void save() {
// Prepare the data; saving with an existing id performs an update
Book book = new Book(Long.parseLong("1"), "民法典", "人大", "666666", 100, new Date());
elasticsearchRestTemplate.save(book);
}
}
@SpringBootTest
public class SpringDataEsTest {
@Autowired
private ElasticsearchRestTemplate elasticsearchRestTemplate;
@Test
public void delete() {
// Prepare the data
Book book = new Book(Long.parseLong("1"), "民法典", "人大", "666", 100, new Date());
// Delete by entity object
elasticsearchRestTemplate.delete(book);
// Delete by id + entity class
elasticsearchRestTemplate.delete("1", Book.class);
}
}
@RunWith(SpringRunner.class)
@SpringBootTest
public class EsArticleControllerTest {
@Autowired
private ElasticsearchRestTemplate elasticsearchRestTemplate;
@Test
public void test1() {
NativeSearchQuery nativeSearchQuery = new NativeSearchQueryBuilder()
//Query condition
.withQuery(QueryBuilders.queryStringQuery("浦东开发开放").defaultField("title"))
//Paging
.withPageable(PageRequest.of(0, 5))
//Sorting
.withSort(SortBuilders.fieldSort("id").order(SortOrder.DESC))
//Highlighted field
.withHighlightFields(new HighlightBuilder.Field("title"))
.build();
List<ArticleEntity> articleEntities = elasticsearchRestTemplate.queryForList(nativeSearchQuery, ArticleEntity.class);
articleEntities.forEach(item -> System.out.println(item.toString()));
}
}
This method does a fuzzy search for a list by the given title. A few key classes are involved:
elasticsearchRestTemplate.queryForList queries for a list, using an ElasticsearchRestTemplate instance;
NativeSearchQuery: the query condition class in Spring Data;
NativeSearchQueryBuilder: builds a NativeSearchQuery object;
QueryBuilders: builds the query conditions; this is an ES class;
SortBuilders: builds the sort conditions;
HighlightBuilder: configures highlighting;
These are described in more detail below.
NativeSearchQuery is a native query condition class that is combined with ES's own query builders to express fairly complex queries.
//Query condition; matching takes relevance into account and results are sorted by score
private QueryBuilder query;
//Filter condition; no relevance scoring or score-based sorting is applied
private QueryBuilder filter;
//Sort builders
private List<SortBuilder> sorts;
private final List<ScriptField> scriptFields = new ArrayList<>();
private CollapseBuilder collapseBuilder;
private List<FacetRequest> facets;
private List<AbstractAggregationBuilder> aggregations;
//Highlight builder
private HighlightBuilder highlightBuilder;
private HighlightBuilder.Field[] highlightFields;
private List<IndexBoost> indicesBoost;
Its internal fields are basically native ES objects.
QueryBuilders is the query condition builder from ES.
Exact match means the query keyword (or its tokens, if analyzed) must match the indexed terms exactly.
1. Query with a string as the keyword; the keyword is analyzed
//Query documents whose title contains "开发" or "开放"; this is equivalent to analyzing "浦东开发开放" first and then querying;
QueryBuilders.queryStringQuery("开发开放").defaultField("title");
//Without specifying a field, all fields are searched
QueryBuilders.queryStringQuery("青春");
//Specify multiple fields
QueryBuilders.queryStringQuery("青春").field("title").field("content");
2. Query with the keyword "开发开放"; the keyword is not analyzed
QueryBuilders.termQuery("title", "开发开放")
QueryBuilders.termsQuery("fieldName", "fieldValue1", "fieldValue2", ...)
3. Query with the keyword "开发开放"; the keyword is analyzed
QueryBuilders.matchQuery("title", "开发开放")
QueryBuilders.multiMatchQuery("fieldValue", "fieldName1", "fieldName2", "fieldName3")
Fuzzy means the query keyword only needs to match the target term approximately.
1. Fuzzy query; the fuzziness parameter tells ES how many character edits (insertions/deletions) it may apply to the keyword when matching
QueryBuilders.fuzzyQuery("title", "开发开放").fuzziness(Fuzziness.ONE)
2. Prefix query: documents whose title starts with "开发开放";
QueryBuilders.prefixQuery("title", "开发开放")
3. Wildcard query, supporting * and ?, where ? matches a single character; a leading wildcard is not recommended because it makes the query very slow
QueryBuilders.wildcardQuery("title", "开*放")
QueryBuilders.wildcardQuery("title", "开?放")
Note:
on analyzed fields, fuzzyQuery, prefixQuery and wildcardQuery do not analyze the query string, so even if such documents exist they may not be found, because after analysis the term "开发开放" may not exist at all.
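A common workaround (a sketch, assuming the mapping defines an un-analyzed keyword sub-field such as title.keyword, which the entity above does not declare) is to run these term-level queries against the keyword sub-field instead:
// prefix/wildcard against the raw, un-analyzed value of the field
QueryBuilders.prefixQuery("title.keyword", "开发开放");
QueryBuilders.wildcardQuery("title.keyword", "开发*开放");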
//Closed interval
QueryBuilders.rangeQuery("fieldName").from("fieldValue1").to("fieldValue2");
//Open interval; includeUpper/includeLower default to true, i.e. inclusive
QueryBuilders.rangeQuery("fieldName").from("fieldValue1").to("fieldValue2").includeUpper(false).includeLower(false);
//Greater than
QueryBuilders.rangeQuery("fieldName").gt("fieldValue");
//Greater than or equal
QueryBuilders.rangeQuery("fieldName").gte("fieldValue");
//Less than
QueryBuilders.rangeQuery("fieldName").lt("fieldValue");
//Less than or equal
QueryBuilders.rangeQuery("fieldName").lte("fieldValue");
QueryBuilders.boolQuery()
QueryBuilders.boolQuery().must();//the document must match the condition, like AND
QueryBuilders.boolQuery().mustNot();//the document must not match the condition, like NOT
QueryBuilders.boolQuery().should();//the document matches if at least one should clause holds, like OR
A concrete demo:
public void testBoolQuery() {
NativeSearchQuery nativeSearchQuery = new NativeSearchQueryBuilder()
.withQuery(QueryBuilders.boolQuery()
.should(QueryBuilders.termQuery("title", "开发"))
.should(QueryBuilders.termQuery("title", "青春"))
.mustNot(QueryBuilders.termQuery("title", "潮头"))
)
.withSort(SortBuilders.fieldSort("id").order(SortOrder.DESC))
.withPageable(PageRequest.of(0, 50))
.build();
List articleEntities = elasticsearchRestTemplate.queryForList(nativeSearchQuery, ArticleEntity.class);
articleEntities.forEach(item -> System.out.println(item.toString()));
}
The query above returns documents whose title tokens contain "开发" or "青春" but not "潮头";
several must clauses can also be combined, as the sketch below shows.
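For instance, a sketch combining must and filter clauses (field names follow the ArticleEntity used above):
BoolQueryBuilder bool = QueryBuilders.boolQuery()
// both must clauses have to match (AND) and both contribute to the score
.must(QueryBuilders.matchQuery("title", "开发"))
.must(QueryBuilders.matchQuery("content", "浦东"))
// the filter clause restricts results without affecting the score
.filter(QueryBuilders.rangeQuery("id").gte(1));
NativeSearchQuery query = new NativeSearchQueryBuilder().withQuery(bool).build();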
In the demo above we used a sort condition:
//Sort by the id field in descending order
.withSort(SortBuilders.fieldSort("id").order(SortOrder.DESC))
Be careful with sorting: there is a pitfall when sorting by id. Sorting by id descending, for example, may not give the result you expect, because ES actually sorts on _id, which is a string, so the order differs from numeric ordering.
Recommendation:
when creating the ES mapping, keep the ES id separate from the business id, e.g. call the business id myId:
@Id
@Field(type = FieldType.Long, store = true)
private Long myId;
@Field(type = FieldType.Text, store = true, analyzer = "ik_smart")
private String title;
@Field(type = FieldType.Text, store = true, analyzer = "ik_smart")
private String content;
Later sorting can then use myId.
Paginate as follows:
@Test
public void testPage() {
NativeSearchQuery nativeSearchQuery = new NativeSearchQueryBuilder()
.withQuery(QueryBuilders.matchQuery("title", "青春"))
.withSort(SortBuilders.fieldSort("myId").order(SortOrder.DESC))
.withPageable(PageRequest.of(0, 50))
.build();
AggregatedPage page = elasticsearchRestTemplate.queryForPage(nativeSearchQuery, ArticleEntity.class);
List articleEntities = page.getContent();
articleEntities.forEach(item -> System.out.println(item.toString()));
}
Note that if no paging parameters are given, ES returns only 10 documents by default.
Query the keyword in the title field and highlight it:
@Test
public void test() {
String preTag = "<font color='red'>";
String postTag = "</font>";
NativeSearchQuery nativeSearchQuery = new NativeSearchQueryBuilder()
.withQuery(QueryBuilders.matchQuery("title", "开发"))
.withPageable(PageRequest.of(0, 50))
.withSort(SortBuilders.fieldSort("id").order(SortOrder.DESC))
.withHighlightFields(new HighlightBuilder.Field("title").preTags(preTag).postTags(postTag))
.build();
AggregatedPage<ArticleEntity> page = elasticsearchRestTemplate.queryForPage(nativeSearchQuery, ArticleEntity.class,
new SearchResultMapper() {
@Override
public <T> AggregatedPage<T> mapResults(SearchResponse response, Class<T> clazz, Pageable pageable) {
List<ArticleEntity> chunk = new ArrayList<>();
for (SearchHit searchHit : response.getHits()) {
if (response.getHits().getHits().length <= 0) {
return null;
}
ArticleEntity article = new ArticleEntity();
article.setMyId(Long.valueOf(searchHit.getSourceAsMap().get("id").toString()));
article.setContent(searchHit.getSourceAsMap().get("content").toString());
HighlightField title = searchHit.getHighlightFields().get("title");
if (title != null) {
article.setTitle(title.fragments()[0].toString());
}
chunk.add(article);
}
if (chunk.size() > 0) {
return new AggregatedPageImpl<>((List<T>) chunk);
}
return null;
}
@Override
public <T> T mapSearchHit(SearchHit searchHit, Class<T> type) {
return null;
}
});
List<ArticleEntity> articleEntities = page.getContent();
articleEntities.forEach(item -> System.out.println(item.toString()));
}
@SpringBootTest
public class SpringDataEsTest {
@Autowired
private ElasticsearchRestTemplate elasticsearchRestTemplate;
@Test
public void search() {
/*
* Build the bool query
* ① query builder: BoolQueryBuilder
* ② keywords: must (AND), mustNot (NOT), should (OR)
* ③ conditions: MatchQueryBuilder for analyzed queries, TermQueryBuilder for non-analyzed queries
*/
BoolQueryBuilder boolQueryBuilder = new BoolQueryBuilder().must(new MatchQueryBuilder("title", "民法典"));
/*
* Build the overall query
* ① query builder: NativeSearchQueryBuilder
* ② set the query with withQuery(BoolQueryBuilder boolQueryBuilder)
* ③ set highlighting with withHighlightFields(new HighlightBuilder.Field("name").preTags(preTag).postTags(postTag))
*/
NativeSearchQuery nativeSearchQuery = new NativeSearchQueryBuilder().withQuery(boolQueryBuilder).build();
// Execute the search
SearchHits<Book> searchHits = elasticsearchRestTemplate.search(nativeSearchQuery, Book.class);
// Iterate over the results
for (SearchHit<Book> searchHit : searchHits) {
Book book = searchHit.getContent();
System.out.println(book);
}
}
}
@GetMapping("/search")
public String search() {
// Query all documents
// QueryBuilder queryBuilder = QueryBuilders.matchAllQuery();
// Exact query (=)
// QueryBuilder queryBuilder = QueryBuilders.termQuery("name", "lisi");
// Exact query on multiple values (in)
// QueryBuilder queryBuilder = QueryBuilders.termsQuery("name", "张三", "lisi");
// match query: the query string is analyzed and the terms are combined with OR; an analyzer can be specified
// QueryBuilder queryBuilder = QueryBuilders.matchQuery("name", "张三");
// QueryBuilder queryBuilder = QueryBuilders.matchQuery("name", "张三").analyzer("ik_max_word");
// match query over multiple fields
// QueryBuilder queryBuilder = QueryBuilders.multiMatchQuery("男", "name", "sex");
// fuzzy query: returns documents containing terms similar to the search term
// QueryBuilder queryBuilder = QueryBuilders.fuzzyQuery("name","lisx");
// prefix query
// QueryBuilder queryBuilder = QueryBuilders.prefixQuery("name","张");
// wildcard query
// QueryBuilder queryBuilder = QueryBuilders.wildcardQuery("name","张*");
// regexp query
QueryBuilder queryBuilder = QueryBuilders.regexpQuery("name", "(张三)|(lisi)");
// boost: raises the score of documents matching this condition so they rank higher
queryBuilder.boost(2);
// Combining multiple conditions
// BoolQueryBuilder queryBuilder = QueryBuilders.boolQuery();
// and
// queryBuilder.must(QueryBuilders.termQuery("name", "张三"));
// queryBuilder.must(QueryBuilders.termQuery("sex", "女"));
// or
// queryBuilder.should(QueryBuilders.termQuery("name", "张三"));
// queryBuilder.should(QueryBuilders.termQuery("name", "lisi"));
// not (exclude)
// queryBuilder.mustNot(QueryBuilders.termQuery("name", "lisi"));
// filter
// queryBuilder.filter(QueryBuilders.matchQuery("name", "张三"));
// Range query
/*
gt  greater than          >
gte greater than or equal >=
lt  less than             <
lte less than or equal    <=
*/
// queryBuilder.filter(new RangeQueryBuilder("age").gt(10).lte(50));
// Build paging; page starts at 0
Pageable pageable = PageRequest.of(0, 3);
Query query = new NativeSearchQueryBuilder()
.withQuery(queryBuilder)
.withPageable(pageable)
//Sorting
.withSort(SortBuilders.fieldSort("_score").order(SortOrder.DESC))
//Field projection
.withFields("name")
.build();
SearchHits<UserEsEntity> search = elasticsearchRestTemplate.search(query, UserEsEntity.class);
log.info("total: {}", search.getTotalHits());
Stream<SearchHit<UserEsEntity>> searchHitStream = search.get();
List<UserEsEntity> list = searchHitStream.map(SearchHit::getContent).collect(Collectors.toList());
log.info("结果数量:{}", list.size());
list.forEach(entity -> {
log.info(entity.toString());
});
return "success";
}
@Autowired
private ElasticsearchRestTemplate elasticsearchRestTemplate;
@Test
/** Search all data, paginated, sorted by the balance field in descending order */
public void test1() {
// Build the query (match all)
MatchAllQueryBuilder queryBuilder1 = QueryBuilders.matchAllQuery();
// Paging
Pageable pageable = PageRequest.of(0, 5);
// Sorting
FieldSortBuilder balance = new FieldSortBuilder("balance").order(SortOrder.DESC);
// Execute the query
NativeSearchQuery query = new NativeSearchQueryBuilder()
.withQuery(queryBuilder1)
.withPageable(pageable)
.withSort(balance)
.build();
SearchHits<EsAccount> searchHits = elasticsearchRestTemplate.search(query, EsAccount.class);
//Wrap the results in a Page object
List<EsAccount> accounts = new ArrayList<>();
for (SearchHit<EsAccount> hit : searchHits) {
accounts.add(hit.getContent());
}
Page<EsAccount> page = new PageImpl<>(accounts,pageable,searchHits.getTotalHits());
//Print the page info
System.out.println(page.getTotalPages());
System.out.println(page.getTotalElements());
}
@Test
/** Combined bool search */
public void test3() {
BoolQueryBuilder boolQueryBuilder = QueryBuilders.boolQuery();
// must: all conditions must match; should: at least one must match; must_not: none may match
boolQueryBuilder.must(QueryBuilders.matchQuery("address", "mill"));
boolQueryBuilder.must(QueryBuilders.matchQuery("address", "lane"));
NativeSearchQuery query = new NativeSearchQueryBuilder()
.withQuery(boolQueryBuilder)
.build();
SearchHits<EsAccount> searchHits = elasticsearchRestTemplate.search(query, EsAccount.class);
for (SearchHit<EsAccount> hit : searchHits) {
System.out.println(hit.getContent());
}
}
@Test
/** Filtered search */
public void test4() {
// Build the conditions
BoolQueryBuilder boolQueryBuilder = QueryBuilders.boolQuery();
RangeQueryBuilder balance = QueryBuilders.rangeQuery("balance").gte(20000).lte(30000);
boolQueryBuilder.filter(balance);
NativeSearchQuery query = new NativeSearchQueryBuilder()
.withQuery(boolQueryBuilder)
.build();
SearchHits<EsAccount> searchHits = elasticsearchRestTemplate.search(query, EsAccount.class);
for (SearchHit<EsAccount> hit : searchHits) {
System.out.println(hit.getContent());
}
}
The sections above covered simple ElasticsearchRestTemplate operations. Another option is ElasticsearchRepository, whose usage is very similar to Spring Data JPA: we only need to write a repository interface extending it and we can operate on ES with the following methods.
@Repository
public interface BookRepository extends ElasticsearchRepository<Book, Long> {
}
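Like Spring Data JPA, the repository also supports derived query methods; for example, you could extend the interface with methods like these (the method names below are illustrative, derived from the Book fields defined earlier):
@Repository
public interface BookRepository extends ElasticsearchRepository<Book, Long> {
// Derived queries built from the method name, analogous to Spring Data JPA
List<Book> findByTitle(String title);
List<Book> findByAuthorAndPageNumGreaterThan(String author, Integer pageNum);
Page<Book> findByDesc(String desc, Pageable pageable);
}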
@SpringBootTest
public class SpringDataEsTest {
@Autowired
private BookRepository bookRepository;
@Test
public void save() {
Book book = new Book(Long.parseLong("1"), "斗破苍穹", "天蚕土豆", "斗气的世界", 100, new Date());
bookRepository.save(book);
}
@Test
public void exist() {
System.out.println(bookRepository.existsById(Long.parseLong("4")));
}
@Test
public void delete() {
bookRepository.deleteById(Long.parseLong("4"));
}
@Test
public void findAll() {
Iterable<Book> books = bookRepository.findAll();
for (Book book : books) {
System.out.println(book);
}
}
}
/**
* @Description: implementation of the SkuEsService interface for the search microservice
*/
@Service
public class SkuEsServiceImpl implements SkuEsService {
@Autowired
private SkuEsMapper skuEsMapper;
@Autowired
private SkuFeign skuFeign;
@Autowired
private ElasticsearchRestTemplate elasticsearchRestTemplate;
@Autowired
private RestHighLevelClient restHighLevelClient;
/**
* Import all Sku data from the database into ES
*/
@Override
public void importDataToElasticSearch(int start, int end) {
// Call the Feign client to fetch List<Sku>
Result<List<Sku>> skuListResult = skuFeign.findAll(start, end);
// Convert List<Sku> into List<SkuInfo>
// JSON.toJSONString(skuListResult.getData()): serialize the List<Sku> in skuListResult to JSON
// JSON.parseArray(): deserialize that JSON into a List<SkuInfo>
List<SkuInfo> skuInfoList = JSON.parseArray(JSON.toJSONString(skuListResult.getData()), SkuInfo.class);
// Iterate over skuInfoList
for (SkuInfo skuInfo : skuInfoList) {
// Get spec (a Map stored as a String) and convert it into a Map, e.g. {'颜色': '梵高星空典藏版', '版本': '8GB+128GB'}
Map<String, Object> specMap = JSON.parseObject(skuInfo.getSpec(), Map.class);
// Set the specMap property on skuInfo
// Each value (Object) of this Map<String,Object> becomes the value of the corresponding field (key) on the Sku document
skuInfo.setSpecMap(specMap);
}
// Call the mapper to bulk-import the data
skuEsMapper.saveAll(skuInfoList);
}
/**
* Keyword search (the optimized search method)
*
* @param searchMap
* @return
*/
@Override
public Map<String, Object> search(Map<String, String> searchMap) {
// 1. Build the search conditions (there will be several later, so this is extracted into its own method)
NativeSearchQueryBuilder builder = builderBasicQuery(searchMap);
// 2. Search by keyword and fetch the matching product data
Map<String, Object> resultMap = searchForPage(builder);
// 3. Product category list
// List<String> categoryList = searchCategoryList(builder);
// resultMap.put("categoryList", categoryList);
// 4. Brand list
// List<String> brandList = searchBrandList(builder);
// resultMap.put("brandList", brandList);
// Total number of hits
String totalElements = resultMap.get("TotalElements").toString();
int totalSize = Integer.parseInt(totalElements);
if (totalSize <= 0) {
//If totalSize is <= 0 an index-out-of-bounds error would occur later, so fall back to a default value
totalSize = 10000;
}
// 5. Aggregate the spec list
// Map<String, Set<String>> specList = searchSpecList(builder,totalSize);
// resultMap.put("specList", specList);
// 6. Merge the grouped results into the result map
Map<String, Object> map = searchGroupList(builder, totalSize);
resultMap.putAll(map);
return resultMap;
}
/**
* Grouped aggregations for specs, brands and categories (one method that returns all aggregation results)
*
* @param builder
* @return
*/
private Map<String, Object> searchGroupList(NativeSearchQueryBuilder builder, int totalSize) {
// Aggregation on category; the alias maps to the field as shown in Kibana
builder.addAggregation(AggregationBuilders.terms("skuCategpryName").field("categoryName.keyword").size(totalSize));
// Aggregation on brand; the alias maps to the field as shown in Kibana
builder.addAggregation(AggregationBuilders.terms("skuBrandName").field("brandName.keyword").size(totalSize));
// Aggregation on spec; the alias maps to the field as shown in Kibana
builder.addAggregation(AggregationBuilders.terms("skuSpec").field("spec.keyword").size(totalSize));
// Execute and get the result set
SearchHits<SkuInfo> searchHits = elasticsearchRestTemplate.search(builder.build(), SkuInfo.class);
// Wrap the SearchHits into a paged result
SearchPage<SkuInfo> page = SearchHitSupport.searchPageFor(searchHits, builder.build().getPageable());
//Process the aggregation results
Aggregations aggregations = page.getSearchHits().getAggregations();
// Category buckets
List<String> categoryList = getGroupList(aggregations, "skuCategpryName");
// Brand buckets
List<String> brandList = getGroupList(aggregations, "skuBrandName");
// Spec buckets
List<String> spceList = getGroupList(aggregations, "skuSpec");
// Convert the spec list into a Map
Map<String, Set<String>> specmap = pullMap(spceList);
// Put everything into one Map
Map<String, Object> map = new HashMap<>();
map.put("categoryList", categoryList);
map.put("brandList", brandList);
map.put("specMap", specmap);
// Return the final result
return map;
}
/**
* Process the aggregation results (category, brand, spec)
*
* @param
* @return
*/
private List<String> getGroupList(Aggregations aggregations, String groupName) {
Terms terms = aggregations.get(groupName);
List<String> resultList = new ArrayList<>();
if (terms != null) {
for (Terms.Bucket bucket : terms.getBuckets()) {
String keyAsString = bucket.getKeyAsString();// the bucket key (category name / brand name)
resultList.add(keyAsString);
}
}
return resultList;
}
/**
* Convert the spec data into a Map
*
* @param list
* @return
*/
private Map<String, Set<String>> pullMap(List<String> list) {
Map<String, Set<String>> map = new HashMap<>();
for (String spec : list) {
// Convert the JSON string into a Map
Map<String, String> specMap = JSON.parseObject(spec, Map.class);
// Iterate over the map
Set<Map.Entry<String, String>> entrySet = specMap.entrySet();
for (Map.Entry<String, String> entry : entrySet) {
// key, e.g. "电视音响效果"
String key = entry.getKey();
// value, e.g. "小影院"
String value = entry.getValue();
// A key may have multiple distinct values, so they are stored in a Set
// First check whether the map already holds a Set for this key
Set<String> set = map.get(key);
if (set == null) {
// If there is no Set yet, create one
set = new HashSet<>();
}
// Otherwise just add the value to the existing Set
set.add(value);
map.put(key, set);
}
}
return map;
}
/**
* Search by keyword
*
* @param builder
* @return
*/
private Map<String, Object> searchForPage(NativeSearchQueryBuilder builder) {
// Keyword highlighting
// Keep adding to the search conditions
HighlightBuilder.Field field = new HighlightBuilder.Field("name"); //highlight the keyword in the sku name
field.preTags("<font color='red'>"); // opening tag
field.postTags("</font>"); // closing tag
field.fragmentSize(100); // number of characters shown in the fragment
builder.withHighlightFields(field);
NativeSearchQuery build = builder.build();
//AggregatedPage<SkuInfo> page = elasticsearchTemplate.queryForPage(build, SkuInfo.class);
// Execute the query
SearchHits<SkuInfo> searchHits = elasticsearchRestTemplate.search(builder.build(), SkuInfo.class);
// Wrap the SearchHits into a paged result
SearchPage<SkuInfo> page = SearchHitSupport.searchPageFor(searchHits, builder.build().getPageable());
// The highlighted fragments live on each SearchHit
// Iterate and replace the original field value with the highlighted one
for(SearchHit<SkuInfo> searchHit:searchHits){
// Get the highlight fragments from the searchHit
Map<String, List<String>> highlightFields = searchHit.getHighlightFields();
// Copy the highlighted fragment into the content
searchHit.getContent().setName(highlightFields.get("name")==null ? searchHit.getContent().getName():highlightFields.get("name").get(0));
}
Map<String, Object> map = new HashMap<>();
// Product result list
map.put("rows", page.getContent());
//Total number of hits
map.put("TotalElements", page.getTotalElements());
//Total number of pages
map.put("TotalPages", page.getTotalPages());
// Current page number
map.put("pageNum", build.getPageable().getPageNumber() + 1);
// Page size
map.put("pageSize", build.getPageable().getPageSize());
return map;
}
/**
* Build the search conditions into a NativeSearchQueryBuilder
*
* @param searchMap
* @return
*/
private NativeSearchQueryBuilder builderBasicQuery(Map<String, String> searchMap) {
// Build the search conditions
NativeSearchQueryBuilder builder = new NativeSearchQueryBuilder();
// Filter conditions
BoolQueryBuilder boolBuilder = new BoolQueryBuilder();
if (searchMap != null) {
// 1. Search by keyword
String keywords = searchMap.get("keywords");
if (!StringUtils.isEmpty(keywords)) {
builder.withQuery(QueryBuilders.matchPhraseQuery("name", keywords));
}
// Keep appending conditions
// 2. Filter by product category
String category = searchMap.get("category");
if (!StringUtils.isEmpty(category)) {
boolBuilder.must(QueryBuilders.matchPhraseQuery("categoryName", category));
}
// 3. Filter by brand
String brand = searchMap.get("brand");
if (!StringUtils.isEmpty(brand)) {
boolBuilder.must(QueryBuilders.matchPhraseQuery("brandName", brand));
}
// 4. Filter by spec (several specs may be selected)
// e.g. spec_屏幕尺寸: 5.7, spec_内存: 40G
Set<String> keys = searchMap.keySet();
for (String key : keys) {
// Spec conditions start with spec_
if (key.startsWith("spec_")) {
String value = searchMap.get(key).replace("\\", "");
boolBuilder.must(QueryBuilders.matchQuery("specMap." + key.substring(5) + ".keyword", value));
}
}
// 5. Filter by price (range)
String price = searchMap.get("price");
if (!StringUtils.isEmpty(price)) {
// The page passes the price as "min-max", or as a single bound
String[] priceArray = price.split("-");
// If only one value is given, query price >= that value
boolBuilder.must(QueryBuilders.rangeQuery("price").gte(priceArray[0]));
if (priceArray.length > 1) {
// If two values are given, also query price <= the second value
boolBuilder.must(QueryBuilders.rangeQuery("price").lte(priceArray[1]));
}
}
// 6. Sorting (sort field, ASC/DESC)
// Sort field
String sortField = searchMap.get("sortField");
// Sort direction (ASC/DESC)
String sortRule = searchMap.get("sortRule");
if (!StringUtils.isEmpty(sortField)) {
builder.withSort(SortBuilders.fieldSort(sortField).order(SortOrder.valueOf(sortRule)));
}
}
// 7. Attach all filter conditions to the NativeSearchQueryBuilder
builder.withFilter(boolBuilder);
// 8. Paging: arg1 is the current page, arg2 is the page size
String page = searchMap.get("pageNum");
if (StringUtils.isEmpty(page)) {
// Default to the first page
page = "1";
}
int pageNum = Integer.parseInt(page);
// Page size passed from the front end
String size = searchMap.get("size");
// Default to 20 items per page
if (StringUtils.isEmpty(size)) {
size = "20";
}
int pageSize = Integer.parseInt(size);
Pageable pageable = PageRequest.of(pageNum - 1, pageSize);
builder.withPageable(pageable);
return builder;
}
}
/**
* Product search management Service implementation
* Created by macro on 2018/6/19.
*/
@Service
public class EsProductServiceImpl implements EsProductService {
private static final Logger LOGGER = LoggerFactory.getLogger(EsProductServiceImpl.class);
@Autowired
private EsProductDao productDao;
@Autowired
private EsProductRepository productRepository;
@Autowired
private ElasticsearchRestTemplate elasticsearchRestTemplate;
@Override
public int importAll() {
List<EsProduct> esProductList = productDao.getAllEsProductList(null);
Iterable<EsProduct> esProductIterable = productRepository.saveAll(esProductList);
Iterator<EsProduct> iterator = esProductIterable.iterator();
int result = 0;
while (iterator.hasNext()) {
result++;
iterator.next();
}
return result;
}
@Override
public void delete(Long id) {
productRepository.deleteById(id);
}
@Override
public EsProduct create(Long id) {
EsProduct result = null;
List<EsProduct> esProductList = productDao.getAllEsProductList(id);
if (esProductList.size() > 0) {
EsProduct esProduct = esProductList.get(0);
result = productRepository.save(esProduct);
}
return result;
}
@Override
public void delete(List<Long> ids) {
if (!CollectionUtils.isEmpty(ids)) {
List<EsProduct> esProductList = new ArrayList<>();
for (Long id : ids) {
EsProduct esProduct = new EsProduct();
esProduct.setId(id);
esProductList.add(esProduct);
}
productRepository.deleteAll(esProductList);
}
}
@Override
public Page<EsProduct> search(String keyword, Integer pageNum, Integer pageSize) {
Pageable pageable = PageRequest.of(pageNum, pageSize);
return productRepository.findByNameOrSubTitleOrKeywords(keyword, keyword, keyword, pageable);
}
@Override
public Page<EsProduct> search(String keyword, Long brandId, Long productCategoryId, Integer pageNum, Integer pageSize,Integer sort) {
Pageable pageable = PageRequest.of(pageNum, pageSize);
NativeSearchQueryBuilder nativeSearchQueryBuilder = new NativeSearchQueryBuilder();
//Paging
nativeSearchQueryBuilder.withPageable(pageable);
//Filtering
if (brandId != null || productCategoryId != null) {
BoolQueryBuilder boolQueryBuilder = QueryBuilders.boolQuery();
if (brandId != null) {
boolQueryBuilder.must(QueryBuilders.termQuery("brandId", brandId));
}
if (productCategoryId != null) {
boolQueryBuilder.must(QueryBuilders.termQuery("productCategoryId", productCategoryId));
}
nativeSearchQueryBuilder.withFilter(boolQueryBuilder);
}
//Search
if (StringUtils.isEmpty(keyword)) {
nativeSearchQueryBuilder.withQuery(QueryBuilders.matchAllQuery());
} else {
List<FunctionScoreQueryBuilder.FilterFunctionBuilder> filterFunctionBuilders = new ArrayList<>();
filterFunctionBuilders.add(new FunctionScoreQueryBuilder.FilterFunctionBuilder(QueryBuilders.matchQuery("name", keyword),
ScoreFunctionBuilders.weightFactorFunction(10)));
filterFunctionBuilders.add(new FunctionScoreQueryBuilder.FilterFunctionBuilder(QueryBuilders.matchQuery("subTitle", keyword),
ScoreFunctionBuilders.weightFactorFunction(5)));
filterFunctionBuilders.add(new FunctionScoreQueryBuilder.FilterFunctionBuilder(QueryBuilders.matchQuery("keywords", keyword),
ScoreFunctionBuilders.weightFactorFunction(2)));
FunctionScoreQueryBuilder.FilterFunctionBuilder[] builders = new FunctionScoreQueryBuilder.FilterFunctionBuilder[filterFunctionBuilders.size()];
filterFunctionBuilders.toArray(builders);
FunctionScoreQueryBuilder functionScoreQueryBuilder = QueryBuilders.functionScoreQuery(builders)
.scoreMode(FunctionScoreQuery.ScoreMode.SUM)
.setMinScore(2);
nativeSearchQueryBuilder.withQuery(functionScoreQueryBuilder);
}
//Sorting
if(sort==1){
//newest first
nativeSearchQueryBuilder.withSort(SortBuilders.fieldSort("id").order(SortOrder.DESC));
}else if(sort==2){
//by sales, high to low
nativeSearchQueryBuilder.withSort(SortBuilders.fieldSort("sale").order(SortOrder.DESC));
}else if(sort==3){
//by price, low to high
nativeSearchQueryBuilder.withSort(SortBuilders.fieldSort("price").order(SortOrder.ASC));
}else if(sort==4){
//by price, high to low
nativeSearchQueryBuilder.withSort(SortBuilders.fieldSort("price").order(SortOrder.DESC));
}else{
//by relevance
nativeSearchQueryBuilder.withSort(SortBuilders.scoreSort().order(SortOrder.DESC));
}
nativeSearchQueryBuilder.withSort(SortBuilders.scoreSort().order(SortOrder.DESC));
NativeSearchQuery searchQuery = nativeSearchQueryBuilder.build();
LOGGER.info("DSL:{}", searchQuery.getQuery().toString());
return productRepository.search(searchQuery);
}
@Override
public Page<EsProduct> recommend(Long id, Integer pageNum, Integer pageSize) {
Pageable pageable = PageRequest.of(pageNum, pageSize);
List<EsProduct> esProductList = productDao.getAllEsProductList(id);
if (esProductList.size() > 0) {
EsProduct esProduct = esProductList.get(0);
String keyword = esProduct.getName();
Long brandId = esProduct.getBrandId();
Long productCategoryId = esProduct.getProductCategoryId();
//Search by product name, brand and category
List<FunctionScoreQueryBuilder.FilterFunctionBuilder> filterFunctionBuilders = new ArrayList<>();
filterFunctionBuilders.add(new FunctionScoreQueryBuilder.FilterFunctionBuilder(QueryBuilders.matchQuery("name", keyword),
ScoreFunctionBuilders.weightFactorFunction(8)));
filterFunctionBuilders.add(new FunctionScoreQueryBuilder.FilterFunctionBuilder(QueryBuilders.matchQuery("subTitle", keyword),
ScoreFunctionBuilders.weightFactorFunction(2)));
filterFunctionBuilders.add(new FunctionScoreQueryBuilder.FilterFunctionBuilder(QueryBuilders.matchQuery("keywords", keyword),
ScoreFunctionBuilders.weightFactorFunction(2)));
filterFunctionBuilders.add(new FunctionScoreQueryBuilder.FilterFunctionBuilder(QueryBuilders.matchQuery("brandId", brandId),
ScoreFunctionBuilders.weightFactorFunction(5)));
filterFunctionBuilders.add(new FunctionScoreQueryBuilder.FilterFunctionBuilder(QueryBuilders.matchQuery("productCategoryId", productCategoryId),
ScoreFunctionBuilders.weightFactorFunction(3)));
FunctionScoreQueryBuilder.FilterFunctionBuilder[] builders = new FunctionScoreQueryBuilder.FilterFunctionBuilder[filterFunctionBuilders.size()];
filterFunctionBuilders.toArray(builders);
FunctionScoreQueryBuilder functionScoreQueryBuilder = QueryBuilders.functionScoreQuery(builders)
.scoreMode(FunctionScoreQuery.ScoreMode.SUM)
.setMinScore(2);
//Filter out the product itself
BoolQueryBuilder boolQueryBuilder = new BoolQueryBuilder();
boolQueryBuilder.mustNot(QueryBuilders.termQuery("id",id));
//Build the query
NativeSearchQueryBuilder builder = new NativeSearchQueryBuilder();
builder.withQuery(functionScoreQueryBuilder);
builder.withFilter(boolQueryBuilder);
builder.withPageable(pageable);
NativeSearchQuery searchQuery = builder.build();
LOGGER.info("DSL:{}", searchQuery.getQuery().toString());
return productRepository.search(searchQuery);
}
return new PageImpl<>(null);
}
@Override
public EsProductRelatedInfo searchRelatedInfo(String keyword) {
NativeSearchQueryBuilder builder = new NativeSearchQueryBuilder();
//Search condition
if(StringUtils.isEmpty(keyword)){
builder.withQuery(QueryBuilders.matchAllQuery());
}else{
builder.withQuery(QueryBuilders.multiMatchQuery(keyword,"name","subTitle","keywords"));
}
//Aggregate on brand name
builder.addAggregation(AggregationBuilders.terms("brandNames").field("brandName"));
//Aggregate on product category name
builder.addAggregation(AggregationBuilders.terms("productCategoryNames").field("productCategoryName"));
//Aggregate on product attributes, filtered to attrValueList.type = 1
AbstractAggregationBuilder aggregationBuilder = AggregationBuilders.nested("allAttrValues","attrValueList")
.subAggregation(AggregationBuilders.filter("productAttrs",QueryBuilders.termQuery("attrValueList.type",1))
.subAggregation(AggregationBuilders.terms("attrIds")
.field("attrValueList.productAttributeId")
.subAggregation(AggregationBuilders.terms("attrValues")
.field("attrValueList.value"))
.subAggregation(AggregationBuilders.terms("attrNames")
.field("attrValueList.name"))));
builder.addAggregation(aggregationBuilder);
NativeSearchQuery searchQuery = builder.build();
SearchHits<EsProduct> searchHits = elasticsearchRestTemplate.search(searchQuery, EsProduct.class);
return convertProductRelatedInfo(searchHits);
}
/**
* Convert the search response into an EsProductRelatedInfo object
*/
private EsProductRelatedInfo convertProductRelatedInfo(SearchHits<EsProduct> response) {
EsProductRelatedInfo productRelatedInfo = new EsProductRelatedInfo();
Map<String, Aggregation> aggregationMap = response.getAggregations().getAsMap();
//Brands
Aggregation brandNames = aggregationMap.get("brandNames");
List<String> brandNameList = new ArrayList<>();
for(int i = 0; i<((Terms) brandNames).getBuckets().size(); i++){
brandNameList.add(((Terms) brandNames).getBuckets().get(i).getKeyAsString());
}
productRelatedInfo.setBrandNames(brandNameList);
//Categories
Aggregation productCategoryNames = aggregationMap.get("productCategoryNames");
List<String> productCategoryNameList = new ArrayList<>();
for(int i=0;i<((Terms) productCategoryNames).getBuckets().size();i++){
productCategoryNameList.add(((Terms) productCategoryNames).getBuckets().get(i).getKeyAsString());
}
productRelatedInfo.setProductCategoryNames(productCategoryNameList);
//Attributes
Aggregation productAttrs = aggregationMap.get("allAttrValues");
List<? extends Terms.Bucket> attrIds = ((ParsedLongTerms) ((ParsedFilter) ((ParsedNested) productAttrs).getAggregations().get("productAttrs")).getAggregations().get("attrIds")).getBuckets();
List<EsProductRelatedInfo.ProductAttr> attrList = new ArrayList<>();
for (Terms.Bucket attrId : attrIds) {
EsProductRelatedInfo.ProductAttr attr = new EsProductRelatedInfo.ProductAttr();
attr.setAttrId((Long) attrId.getKey());
List<String> attrValueList = new ArrayList<>();
List<? extends Terms.Bucket> attrValues = ((ParsedStringTerms) attrId.getAggregations().get("attrValues")).getBuckets();
List<? extends Terms.Bucket> attrNames = ((ParsedStringTerms) attrId.getAggregations().get("attrNames")).getBuckets();
for (Terms.Bucket attrValue : attrValues) {
attrValueList.add(attrValue.getKeyAsString());
}
attr.setAttrValues(attrValueList);
if(!CollectionUtils.isEmpty(attrNames)){
String attrName = attrNames.get(0).getKeyAsString();
attr.setAttrName(attrName);
}
attrList.add(attr);
}
productRelatedInfo.setProductAttrs(attrList);
return productRelatedInfo;
}
}
By default a query returns at most 10,000 hits. The limit can be changed by PUTting a setting to the ES server:
PUT /<index-name>/_settings?preserve_existing=true
{"index.max_result_window": "2000000000"}
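The same setting can also be applied from Java through the RestHighLevelClient; a sketch, using the my_book index from the earlier examples (inside a method that declares throws IOException):
UpdateSettingsRequest request = new UpdateSettingsRequest("my_book")
.settings(Settings.builder()
.put("index.max_result_window", 2000000000) // raise the 10,000-hit window
.build());
AcknowledgedResponse response = restHighLevelClient.indices().putSettings(request, RequestOptions.DEFAULT);
System.out.println("acknowledged: " + response.isAcknowledged());
(UpdateSettingsRequest is org.elasticsearch.action.admin.indices.settings.put.UpdateSettingsRequest; Settings is org.elasticsearch.common.settings.Settings.)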
When paging with ElasticsearchRepository or ElasticsearchRestTemplate, the returned total is also capped at 10,000, so the paging UI only ever shows 10,000 records.
Looking at the AbstractQuery source, there is a trackTotalHits field for requesting the real total, but the builder class NativeSearchQueryBuilder does not expose a setter for it, so we can define our own query builder on top of it that does.
import org.elasticsearch.index.query.QueryBuilder;
import org.elasticsearch.search.sort.SortBuilder;
import org.springframework.data.domain.Pageable;
import org.springframework.data.elasticsearch.core.query.NativeSearchQuery;
import org.springframework.data.elasticsearch.core.query.NativeSearchQueryBuilder;
/**
* Custom query builder that adds trackTotalHits support on top of NativeSearchQueryBuilder
*/
public class NativeSearchQueryBuilderCustom extends NativeSearchQueryBuilder{
private Boolean trackTotalHits;
public NativeSearchQueryBuilderCustom() {
}
public NativeSearchQueryBuilderCustom trackTotalHits(boolean trackTotalHits) {
this.trackTotalHits = trackTotalHits;
return this;
}
@Override
public NativeSearchQueryBuilderCustom withFields(String... fields) {
super.withFields(fields);
return this;
}
@Override
public NativeSearchQueryBuilderCustom withQuery(QueryBuilder queryBuilder) {
super.withQuery(queryBuilder);
return this;
}
@Override
public NativeSearchQueryBuilderCustom withSort(SortBuilder sortBuilder) {
super.withSort(sortBuilder);
return this;
}
@Override
public NativeSearchQueryBuilderCustom withPageable(Pageable pageable) {
super.withPageable(pageable);
return this;
}
@Override
public NativeSearchQuery build() {
NativeSearchQuery build = super.build();
if(this.trackTotalHits != null){
build.setTrackTotalHits(this.trackTotalHits);
}
return build;
}
}
NativeSearchQueryBuilderCustom queryBuilder = new NativeSearchQueryBuilderCustom()
.withPageable(PageRequest.of(query.getCurrent(), query.getSize())) // paging
.withSort(SortBuilders.fieldSort("createTime").order(SortOrder.DESC)) // sorting
.withFields(fixedEsField) // field projection
.trackTotalHits(true); // request the real total hit count
repository.search(queryBuilder.build());