
Full-Text Search with Elasticsearch

1. Basic Concepts

1. Index

As a verb: equivalent to an INSERT in MySQL.

As a noun: equivalent to a database in MySQL.

2. Type

Within an Index you can define one or more types, similar to a table in MySQL; data of the same type is stored together. (Note: types are deprecated in Elasticsearch 7.x and removed in 8.x.)

3. Document

A Document is a single piece of data stored under an index and type, in JSON format. A Document is analogous to a row inside a MySQL table.

4. Inverted Index

An inverted index maps each term to the list of documents that contain it; this is what makes full-text search fast.
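As an illustration only (not Lucene's actual data structures), a minimal inverted index can be sketched in Python:

```python
import re
from collections import defaultdict

def build_inverted_index(docs):
    """Map every term to the set of document ids that contain it."""
    index = defaultdict(set)
    for doc_id, text in docs.items():
        for term in re.findall(r"[a-z0-9]+", text.lower()):
            index[term].add(doc_id)
    return index

def search(index, *terms):
    """Documents containing ALL the given terms (intersection of postings lists)."""
    postings = [index.get(t, set()) for t in terms]
    return set.intersection(*postings) if postings else set()

docs = {1: "Mill Lane", 2: "Mill Road", 3: "Holmes Lane"}
idx = build_inverted_index(docs)
print(search(idx, "mill"))          # documents containing "mill"
print(search(idx, "mill", "lane"))  # documents containing both terms
```

Searching never scans the documents themselves; it only intersects the per-term postings lists, which is why lookups stay fast as the corpus grows.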

2. Installing Elasticsearch with Docker

2.1 Pull the images

docker pull elasticsearch:7.4.2
docker pull kibana:7.4.2

2.2 Create the instance

2.2.1 Create the mount directories

Run these inside your elasticsearch working directory:

mkdir ./config
mkdir ./data

Remember to grant permissions (run from the parent directory; 777 is convenient for a local dev box, not for production):

chmod -R 777 ./elasticsearch

2.2.2 Allow Elasticsearch to be reached from any address outside the container

echo "http.host: 0.0.0.0" >> ./config/elasticsearch.yml

The resulting elasticsearch.yml:

http.host: 0.0.0.0

2.2.3 Start with docker

Note that bind-mount host paths must be absolute, hence $(pwd):

docker run --name elasticsearch -p 9200:9200 -p 9300:9300 \
  -e "discovery.type=single-node" \
  -e ES_JAVA_OPTS="-Xms512m -Xmx1024m" \
  -v "$(pwd)/config/elasticsearch.yml":/usr/share/elasticsearch/config/elasticsearch.yml \
  -v "$(pwd)/data":/usr/share/elasticsearch/data \
  -v "$(pwd)/plugins":/usr/share/elasticsearch/plugins \
  -d elasticsearch:7.4.2

2.3 Install Kibana

docker run --name kibana -e ELASTICSEARCH_HOSTS=http://192.168.232.209:9200 -p 5601:5601 \
  -d kibana:7.4.2

3. Basic Retrieval

3.1 _cat 

View node information:

http://192.168.232.209:9200/_cat/nodes

View the Elasticsearch cluster health:

http://192.168.232.209:9200/_cat/health

View the Elasticsearch master node:

http://192.168.232.209:9200/_cat/master

View all indices:

http://192.168.232.209:9200/_cat/indices

3.2 Index a document (save or update a record)

To save a document, specify the index and type it belongs to, and the unique id to use:

http://192.168.232.209:9200/customer/external/1

3.3 Retrieve a document

http://192.168.232.209:9200/customer/external/1

3.4 Update a document

3.4.1 _update

With _update, if the submitted values are identical to the current document, the operation is a no-op and the version is not incremented.


3.5 Delete a document

3.6 bulk batch API

Batch operations. Test data can be copied from:

https://gitee.com/xlh_blog/common_content/blob/master/es%E6%B5%8B%E8%AF%95%E6%95%B0%E6%8D%AE.json#

Execute the copied data against the _bulk endpoint.
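The _bulk body is newline-delimited JSON: each action/metadata line is followed by a source line, and the whole body must end with a newline. A small Python sketch of building such a payload (index and field names are made up for illustration):

```python
import json

def build_bulk_body(index, docs):
    """Build an NDJSON _bulk payload: one action line + one source line per doc."""
    lines = []
    for doc_id, source in docs:
        # action/metadata line
        lines.append(json.dumps({"index": {"_index": index, "_id": doc_id}}))
        # source line
        lines.append(json.dumps(source))
    return "\n".join(lines) + "\n"  # _bulk requires a trailing newline

body = build_bulk_body("customer", [
    ("1", {"name": "John Doe"}),
    ("2", {"name": "Jane Doe"}),
])
print(body)
```

The resulting string is what gets POSTed with Content-Type application/x-ndjson.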

4. Advanced Retrieval

1. Search API

ES supports two basic ways of searching:

  • via the REST request URI, passing search parameters in the query string (uri + parameters)
  • via the REST request body (uri + body)

GET /bank/_search?q=*&sort=account_number:asc

q=* matches all documents; sort orders them by account_number ascending.

2. Query DSL

GET /bank/_search
{
  "query": {
    "match_all": {}
  },
  "sort": [
    { "account_number": "asc" },
    { "balance": "desc" }
  ]
}

3. Partial retrieval (paging and _source filtering)

GET /bank/_search
{
  "query": {
    "match_all": {}
  },
  "sort": [
    { "account_number": "desc" },
    { "balance": "desc" }
  ],
  "from": 0,
  "size": 20,
  "_source": ["balance", "account_number"]
}

4. match [match query]

Exact value on a non-text field:

GET /bank/_search
{
  "query": {
    "match": {
      "account_number": 20
    }
  }
}

Full-text match on a text field (the query string is analyzed, so this matches "mill" or "lane"):

GET /bank/_search
{
  "query": {
    "match": {
      "address": "mill lane"
    }
  }
}

Full-text results are ranked by relevance score.

5. match_phrase [phrase match]

Matches the query as a whole phrase (the terms must appear together and in order), instead of as independent terms:

GET /bank/_search
{
  "query": {
    "match_phrase": {
      "address": "mill lane"
    }
  }
}

6. multi_match [multi-field match]

This is an OR across fields: a document matches as soon as any one of the listed fields matches:

GET /bank/_search
{
  "query": {
    "multi_match": {
      "query": "mill",
      "fields": ["state", "address"]
    }
  }
}

The query string is still analyzed normally, so several terms can match across the fields:

GET /bank/_search
{
  "query": {
    "multi_match": {
      "query": "mill Movico",
      "fields": ["city", "address"]
    }
  }
}

7. bool [compound query]

bool is used for compound queries.

Compound clauses can wrap any other query clauses, including other compound clauses. This is important to understand: it means bool queries can nest inside each other to express very complex logic.

must: every listed condition must match (logical AND).

must_not: none of the listed conditions may match.

should: conditions that are not required to match (not a strict OR); matching them raises the relevance score.

GET /bank/_search
{
  "query": {
    "bool": {
      "must": [
        { "match": { "gender": "m" } },
        { "match": { "address": "Mill" } }
      ],
      "must_not": [
        { "match": { "age": 28 } }
      ],
      "should": [
        { "match": { "lastname": "v" } }
      ]
    }
  }
}

8. filter [result filtering]

Not every query needs to produce a score, especially clauses used only for "filtering" documents. To avoid computing scores, Elasticsearch automatically detects these contexts and optimizes query execution.

GET /bank/_search
{
  "query": {
    "bool": {
      "must": [
        { "match": { "gender": "m" } },
        { "match": { "address": "Mill" } }
      ],
      "must_not": [
        { "match": { "age": 18 } }
      ],
      "should": [
        { "match": { "lastname": "Wallace" } }
      ],
      "filter": {
        "range": {
          "age": {
            "gte": 18,
            "lte": 20
          }
        }
      }
    }
  }
}

9. term

Like match, it matches a value on a field, but without analysis. Use match for full-text (text) fields and term for exact values on non-text fields.

When full-text search is not needed (numbers, ages, and so on), use term.

GET /bank/_search
{
  "query": {
    "term": {
      "age": {
        "value": "28"
      }
    }
  }
}

For exact matches on string values, query the keyword sub-field:

GET /bank/_search
{
  "query": {
    "match": {
      "email.keyword": "margueritewall@aquoavo.com"
    }
  }
}

Difference between address.keyword and match_phrase: the former is an exact match against the entire stored value; the latter matches any document whose text contains the phrase.

Use term for non-text fields.

Use match for text fields.

10. aggregations

Aggregations provide the ability to group data and extract statistics from it. The simplest aggregations are roughly equivalent to SQL GROUP BY plus SQL aggregate functions. In Elasticsearch, a single search can return hits and aggregation results together, kept separate within one response. You can run one query plus multiple aggregations and get all of the results in a single round trip, through one concise API, avoiding extra network round trips.

Example: find the age distribution and the average age of everyone whose address contains "mill", without returning the matching documents themselves.

GET /bank/_search
{
  "query": {
    "match": {
      "address": "mill"
    }
  },
  "aggs": {
    "ageAgg": {
      "terms": {
        "field": "age",
        "size": 10
      }
    },
    "ageAvg": {
      "avg": {
        "field": "age"
      }
    },
    "balanceAvg": {
      "avg": {
        "field": "balance"
      }
    }
  },
  "size": 0
}
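The terms and avg aggregations above correspond to a GROUP BY plus aggregate functions; on a small made-up in-memory sample, the same computation looks like this:

```python
from collections import defaultdict

# made-up sample of matching documents
accounts = [
    {"age": 28, "balance": 100},
    {"age": 28, "balance": 300},
    {"age": 32, "balance": 200},
]

# "ageAgg": terms aggregation -> document count per age bucket
buckets = defaultdict(int)
for a in accounts:
    buckets[a["age"]] += 1

# "ageAvg" / "balanceAvg": avg aggregations over all matching docs
age_avg = sum(a["age"] for a in accounts) / len(accounts)
balance_avg = sum(a["balance"] for a in accounts) / len(accounts)

print(dict(buckets))   # {28: 2, 32: 1}
print(balance_avg)     # 200.0
```

"size": 0 in the DSL corresponds to computing only these statistics and discarding the documents themselves.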

A more complex example:

Aggregate by age, and compute the average balance within each age bucket.

## Aggregate by age, and compute the average balance within each age bucket
GET /bank/_search
{
  "query": {
    "match_all": {}
  },
  "aggs": {
    "aggAgg": {
      "terms": {
        "field": "age",
        "size": 100
      },
      "aggs": {
        "aggAvg": {
          "avg": {
            "field": "balance"
          }
        }
      }
    }
  }
}

Complex example 2:

Find the full age distribution, and within each age bucket the average balance for gender M, for gender F, and the overall average for that bucket.

## Age distribution, with per-gender and overall average balance in each bucket
GET /bank/_search
{
  "query": {
    "match_all": {}
  },
  "aggs": {
    "aggAggs": {
      "terms": {
        "field": "age",
        "size": 100
      },
      "aggs": {
        "avgBalanceAll": {
          "avg": {
            "field": "balance"
          }
        },
        "genderAgg": {
          "terms": {
            "field": "gender.keyword",
            "size": 2
          },
          "aggs": {
            "avgBalance": {
              "avg": {
                "field": "balance"
              }
            }
          }
        }
      }
    }
  }
}

11. mapping

All field data types are listed in the official mapping documentation.

Create an index with explicit field types:

PUT /my_index
{
  "mappings": {
    "properties": {
      "age":   { "type": "integer" },
      "email": { "type": "keyword" },
      "name":  { "type": "text" }
    }
  }
}

Add a field to an existing mapping:

PUT /my_index/_mapping
{
  "properties": {
    "employee-id": {
      "type": "keyword",
      "index": false
    }
  }
}

"index": false means the field is not indexed, so it cannot be searched directly; it is effectively redundant stored data, reached by querying other fields.

Migrating data

Create the new index first:

PUT /newbank
{
  "mappings": {
    "properties": {
      "account_number": {
        "type": "long"
      },
      "address": {
        "type": "text"
      },
      "age": {
        "type": "integer"
      },
      "balance": {
        "type": "long"
      },
      "city": {
        "type": "keyword"
      },
      "email": {
        "type": "keyword"
      },
      "employer": {
        "type": "text",
        "fields": {
          "keyword": {
            "type": "keyword",
            "ignore_above": 256
          }
        }
      },
      "firstname": {
        "type": "text"
      },
      "gender": {
        "type": "keyword"
      },
      "lastname": {
        "type": "text",
        "fields": {
          "keyword": {
            "type": "keyword",
            "ignore_above": 256
          }
        }
      },
      "state": {
        "type": "keyword"
      }
    }
  }
}

The above covers migrating to the 6.0+ style, where documents are stored without types.

For pre-6.0 indices that still use a type, specify the type in the reindex source:

POST _reindex
{
  "source": {
    "index": "bank",
    "type": "account"
  },
  "dest": {
    "index": "newbank"
  }
}

5. Analysis (Tokenization)

POST _analyze
{
  "analyzer": "standard",
  "text": "The 2 QUICK Brown_Foxes jumped over the lazy dog's bone."
}
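As a rough intuition only (not the real Unicode word segmentation the standard analyzer performs), standard analysis amounts to lowercasing the text and splitting it on word boundaries:

```python
import re

def standard_like_analyze(text):
    # lowercase + split into runs of word characters; a crude stand-in for
    # the standard analyzer's Unicode word segmentation
    return re.findall(r"[\w']+", text.lower())

tokens = standard_like_analyze(
    "The 2 QUICK Brown_Foxes jumped over the lazy dog's bone.")
print(tokens)
```

The real analyzer additionally handles punctuation, CJK text, and other Unicode word-break rules that this one-line regex does not.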

1. Install the ik analyzer

Note: the default elasticsearch-plugin install xx.zip automatic installation cannot be used here; install the plugin manually, matching your ES version.

Download it from:

Index of: analysis-ik/stable/ (infinilabs.com)

Place it in the plugins directory mounted into the ES container (or enter the container's plugins directory directly):

docker exec -it <container-id> /bin/bash

POST _analyze
{
  "analyzer": "ik_smart",
  "text": "我是中国人"
}

POST _analyze
{
  "analyzer": "ik_max_word",
  "text": "鸡你太美"
}

The installation steps are the same as in my previous post:

ElasticSearch-CSDN博客

2. Custom dictionary for the analyzer

1. Reinstall nginx

Run the following in the nginx directory:

docker run -p 80:80 --name nginx \
  -v "$(pwd)/html":/usr/share/nginx/html \
  -v "$(pwd)/logs":/var/log/nginx \
  -v "$(pwd)/conf":/etc/nginx \
  -d nginx:1.10

2. Create the dictionary file

/opt/nginx/html/es/fenci.txt

尚硅谷
乔碧螺

3. In the ES plugins directory, edit the ik configuration XML so that the remote dictionary entry points at that file:

"/opt/elasticsearch/plugins/ik/config/IKAnalyzer.cfg.xml"

<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE properties SYSTEM "http://java.sun.com/dtd/properties.dtd">
<properties>
    <comment>IK Analyzer extension configuration</comment>
    <!-- configure your own extension dictionary here -->
    <entry key="ext_dict"></entry>
    <!-- configure your own extension stop-word dictionary here -->
    <entry key="ext_stopwords"></entry>
    <!-- configure a remote extension dictionary here -->
    <entry key="remote_ext_dict">http://your-vm-address:80/es/fenci.txt</entry>
    <!-- configure a remote extension stop-word dictionary here -->
    <!-- <entry key="remote_ext_stopwords">words_location</entry> -->
</properties>

4. After editing, restart the ES container:

docker restart elasticsearch

6. Using Elasticsearch with Spring Boot

1. Elasticsearch-Rest-Client: the official RestClient. It wraps the ES operations in a clearly layered API and is easy to get started with.

Final choice: Elasticsearch-Rest-Client (elasticsearch-rest-high-level-client)

https://www.elastic.co/guide/en/elasticsearch/client/java-rest/current/java-rest-high.html
<!-- import the ES high-level API -->
<dependency>
    <groupId>org.elasticsearch.client</groupId>
    <artifactId>elasticsearch-rest-high-level-client</artifactId>
    <version>${elasticsearch.version}</version>
</dependency>
package com.jmj.gulimall.search.config;

import org.apache.http.HttpHost;
import org.elasticsearch.client.RestClient;
import org.elasticsearch.client.RestHighLevelClient;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

/**
 * 1. Import the dependency.
 * 2. Write the configuration: register a RestHighLevelClient in the container.
 * Then follow the official API docs: https://www.elastic.co/guide/en/elasticsearch/client/java-rest/7.4/java-rest-high-getting-started-initialization.html
 */
@Configuration
public class GulimallElasticSearchConfig {

    @Bean
    public RestHighLevelClient esRestClient() {
        RestHighLevelClient client = new RestHighLevelClient(
                RestClient.builder(
                        new HttpHost("192.168.232.209", 9200, "http")));
        return client;
    }
}

2. RequestOptions

Request options, e.g. security credentials or a token in a request header:

package com.jmj.gulimall.search.config;

import org.apache.http.HttpHost;
import org.elasticsearch.client.RequestOptions;
import org.elasticsearch.client.RestClient;
import org.elasticsearch.client.RestHighLevelClient;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

/**
 * 1. Import the dependency.
 * 2. Write the configuration: register a RestHighLevelClient in the container.
 * Then follow the official API docs: https://www.elastic.co/guide/en/elasticsearch/client/java-rest/7.4/java-rest-high-getting-started-initialization.html
 */
@Configuration
public class GulimallElasticSearchConfig {

    public static final RequestOptions COMMON_OPTIONS;

    static {
        RequestOptions.Builder builder = RequestOptions.DEFAULT.toBuilder();
        // builder.addHeader("Authorization", "Bearer " + TOKEN);
        // builder.setHttpAsyncResponseConsumerFactory(
        //         new HttpAsyncResponseConsumerFactory
        //                 .HeapBufferedResponseConsumerFactory(30 * 1024 * 1024 * 1024));
        COMMON_OPTIONS = builder.build();
    }

    @Bean
    public RestHighLevelClient esRestClient() {
        RestHighLevelClient client = new RestHighLevelClient(
                RestClient.builder(
                        new HttpHost("192.168.232.209", 9200, "http")));
        return client;
    }
}

3. Index API

There are several ways to build the document source (key-value pairs, a Map, an XContentBuilder, or a JSON string); the JSON-string form is used below:

/**
 * Test storing data in ES.
 * Also works as an update.
 */
@Test
void indexData() throws IOException {
    // index named "users"
    IndexRequest indexRequest = new IndexRequest("users");
    // set the document id; if omitted, one is generated
    // sending the same id again becomes an update (optimistic-lock version control)
    indexRequest.id("1");
    // 1. key-value pairs
    // indexRequest.source("userName", "zhangsan", "age", 18, "gender", "男");
    // 2. JSON
    User user = new User("zhangsan", "男", 18);
    String json = new ObjectMapper().writeValueAsString(user);
    // one-second timeout
    indexRequest.timeout(TimeValue.timeValueSeconds(1));
    indexRequest.source(json, XContentType.JSON); // the content to save
    // execute the operation
    IndexResponse index = client.index(indexRequest, GulimallElasticSearchConfig.COMMON_OPTIONS);
    // extract the useful parts of the response
    System.out.println(index);
}

4. Search API

@Data
public static class Account {
    private int account_number;
    private String firstname;
    private String address;
    private int balance;
    private String gender;
    private String city;
    private String employer;
    private String state;
    private int age;
    private String email;
    private String lastname;
}

/**
 * Search / retrieval.
 */
@Test
void searchData() throws IOException {
    // 1. create the search request
    SearchRequest searchRequest = new SearchRequest();
    // 2. specify the index
    searchRequest.indices("bank");
    // 3. build the DSL search conditions
    SearchSourceBuilder sourceBuilder = new SearchSourceBuilder();
    // sourceBuilder.from();
    // sourceBuilder.size();
    // sourceBuilder.query(QueryBuilders.matchAllQuery());
    sourceBuilder.query(QueryBuilders.matchQuery("address", "mill"));
    // group by age
    TermsAggregationBuilder ageAgg = AggregationBuilders.terms("ageAgg").field("age").size(10);
    sourceBuilder.aggregation(ageAgg);
    // average balance
    AvgAggregationBuilder balanceAge = AggregationBuilders.avg("balanceAvg").field("balance");
    sourceBuilder.aggregation(balanceAge);
    System.out.println("search conditions: " + sourceBuilder);
    searchRequest.source(sourceBuilder);
    // 4. execute the search
    SearchResponse response = client.search(searchRequest, GulimallElasticSearchConfig.COMMON_OPTIONS);
    // 5. analyze the response: hits
    SearchHits hits = response.getHits();
    for (SearchHit hit : hits.getHits()) {
        String sourceAsString = hit.getSourceAsString();
        Account account = new ObjectMapper().readValue(sourceAsString, Account.class);
        System.out.println(account);
    }
    // aggregation results
    Aggregations aggregations = response.getAggregations();
    Terms ageAgg1 = aggregations.get("ageAgg");
    for (Terms.Bucket bucket : ageAgg1.getBuckets()) {
        System.out.println("age: " + bucket.getKeyAsString() + " => " + bucket.getDocCount());
    }
    Avg balanceAvg = aggregations.get("balanceAvg");
    System.out.println("average balance: " + balanceAvg.getValue());
}

7. The SKU Storage Model in ES

The sku title uses the ik analyzer. Image URLs, the brand name, brand id, and similar fields are not searchable. The product's spec attributes are stored with the nested type, i.e. as embedded objects. Further details are omitted here.

PUT product
{
  "mappings": {
    "properties": {
      "skuId": {
        "type": "long"
      },
      "spuId": {
        "type": "long"
      },
      "skuTitle": {
        "type": "text",
        "analyzer": "ik_smart"
      },
      "skuPrice": {
        "type": "keyword"
      },
      "skuImg": {
        "type": "keyword",
        "index": false,
        "doc_values": false
      },
      "saleCount": {
        "type": "long"
      },
      "hasStock": {
        "type": "boolean"
      },
      "hotScore": {
        "type": "long"
      },
      "brandId": {
        "type": "long"
      },
      "catalogId": {
        "type": "long"
      },
      "brandName": {
        "type": "keyword",
        "index": false,
        "doc_values": false
      },
      "brandImg": {
        "type": "keyword",
        "index": false,
        "doc_values": false
      },
      "catalogName": {
        "type": "keyword",
        "index": false,
        "doc_values": false
      },
      "attrs": {
        "type": "nested",
        "properties": {
          "attrId": {
            "type": "long"
          },
          "attrName": {
            "type": "keyword",
            "index": false,
            "doc_values": false
          },
          "attrValue": {
            "type": "keyword"
          }
        }
      }
    }
  }
}

8. ES Flattening of Object Arrays

By default, an array of objects is flattened: user.first becomes ["John", "Alice"] and user.last becomes ["Smith", "White"], so the association between first and last inside each object is lost.

PUT my_index/_doc/1
{
  "group": "fans",
  "user": [
    {
      "first": "John",
      "last": "Smith"
    },
    {
      "first": "Alice",
      "last": "White"
    }
  ]
}

With the default flattened mapping, this query finds the document even though no single user is named Alice Smith:

GET my_index/_search
{
  "query": {
    "bool": {
      "must": [
        {
          "match": {
            "user.first": "Alice"
          }
        },
        {
          "match": {
            "user.last": "Smith"
          }
        }
      ]
    }
  }
}
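Why the flattened form cross-matches can be simulated directly in Python (a simplification of the real mapping behavior):

```python
def flatten(doc):
    """Default ES behavior: arrays of objects collapse into parallel value lists."""
    flat = {}
    for user in doc["user"]:
        for field, value in user.items():
            flat.setdefault("user." + field, []).append(value)
    return flat

doc = {"user": [{"first": "John", "last": "Smith"},
                {"first": "Alice", "last": "White"}]}
flat = flatten(doc)
print(flat)  # {'user.first': ['John', 'Alice'], 'user.last': ['Smith', 'White']}

# The flattened doc "matches" first=Alice AND last=Smith, although no single
# user object has that combination:
cross_match = "Alice" in flat["user.first"] and "Smith" in flat["user.last"]

# A nested mapping checks each object individually instead:
nested_match = any(u["first"] == "Alice" and u["last"] == "Smith"
                   for u in doc["user"])
print(cross_match, nested_match)  # True False
```

This is exactly the difference the nested mapping below removes: each inner object is indexed and matched as its own hidden document.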

To disable flattening, map user as a nested type:

PUT my_index
{
  "mappings": {
    "properties": {
      "user": {
        "type": "nested"
      }
    }
  }
}

Re-run the query: with the nested mapping, the cross-object combination no longer matches.

9. Publishing Products to the Mall (putting an SPU on sale)

@Override
@Transactional(rollbackFor = Exception.class)
public void up(Long spuId) {
    // assemble the data to send
    // 1. find all sku info for this spuId, plus the brand names
    List<SkuInfoEntity> skuInfoEntityList = skuInfoService.getSkusBySpuId(spuId);
    // TODO query all attributes of this sku that can be used for search
    List<ProductAttrValueEntity> baseAttrs = productAttrValueService.baseAttrlistforspu(spuId);
    List<Long> attrIds = baseAttrs.stream().map(a -> a.getAttrId()).collect(Collectors.toList());
    List<Long> searchAttrIds = attrService.selectSearchAtts(attrIds);
    List<SkuEsModel.Attrs> attrsList = baseAttrs.stream()
            .filter(item -> searchAttrIds.contains(item.getAttrId()))
            .map(item -> {
                SkuEsModel.Attrs attrs1 = new SkuEsModel.Attrs();
                BeanUtils.copyProperties(item, attrs1);
                return attrs1;
            })
            .collect(Collectors.toList());
    // TODO remote call to the warehouse service: check stock for each sku
    List<Long> skuIds = skuInfoEntityList.stream().map(s -> s.getSkuId()).distinct().collect(Collectors.toList());
    List<SkuHasStockVo> skusHasStock = wareFeignService.getSkusHasStock(skuIds);
    Map<Long, Boolean> stockMap = skusHasStock.stream()
            .collect(Collectors.toMap(s -> s.getSkuId(), s -> s.getHasStock()));
    // 2. build the ES model for each SKU
    List<SkuEsModel> upProducts = skuInfoEntityList.stream().map(sku -> {
        SkuEsModel esModel = new SkuEsModel();
        BeanUtils.copyProperties(sku, esModel);
        esModel.setSkuPrice(sku.getPrice());
        esModel.setSkuImg(sku.getSkuDefaultImg());
        Boolean hasStock = stockMap.get(esModel.getSkuId());
        esModel.setHasStock(hasStock != null ? hasStock : false);
        // TODO hot score
        esModel.setHotScore(0L);
        // TODO look up brand and category names
        BrandEntity brand = brandService.getById(esModel.getBrandId());
        esModel.setBrandName(brand.getName());
        esModel.setBrandImg(brand.getLogo());
        CategoryEntity category = categoryService.getById(esModel.getCatalogId());
        esModel.setCatalogName(category.getName());
        // set the searchable attributes
        esModel.setAttrs(attrsList);
        return esModel;
    }).collect(Collectors.toList());
    // TODO send the data to ES for saving
    searchFeignService.productStatusUp(upProducts);
    // TODO update the publish status
    this.update(new UpdateWrapper<SpuInfoEntity>()
            .set("publish_status", ProductConstant.StatusEnum.SPU_UP.getCode())
            .set("update_time", new Date())
            .eq("id", spuId));
    // Feign call flow:
    // 1. build the request data: serialize the object to json
    // 2. send and execute the request (on success the response is decoded)
    // 3. execution has a retry mechanism (disabled by default)
}