
黑马头条 (Heima Toutiao) Explained, Part 4


7. App-Side Article Search

 

7.1 Setting Up the Elasticsearch Environment

① Pull the image

```shell
docker pull elasticsearch:7.4.0
```

② Create the container

```shell
docker run -id --name elasticsearch --restart=always -p 9200:9200 -p 9300:9300 -v /usr/share/elasticsearch/plugins:/usr/share/elasticsearch/plugins -e "discovery.type=single-node" elasticsearch:7.4.0
```

③ Configure the IK Chinese analyzer

Because the plugins directory was mapped when the elasticsearch container was created, the IK Chinese analyzer can be configured from the host machine.

When choosing an IK analyzer release, its version must match the Elasticsearch version.

Upload elasticsearch-analysis-ik-7.4.0.zip from the course materials to the server, place it in the mapped directory (plugins), and unzip it.

```shell
# switch to the plugins directory
cd /usr/share/elasticsearch/plugins
# create a directory for the analyzer
mkdir analysis-ik
cd analysis-ik
# move the uploaded file from the root home directory
mv elasticsearch-analysis-ik-7.4.0.zip /usr/share/elasticsearch/plugins/analysis-ik
# unzip it
cd /usr/share/elasticsearch/plugins/analysis-ik
unzip elasticsearch-analysis-ik-7.4.0.zip
```

Test with Postman.

7.2 Requirements Analysis

 

Implementation approach

Create the mapping with Postman

PUT request: http://192.168.200.130:9200/app_info_article

```json
{
  "mappings": {
    "properties": {
      "id": {
        "type": "long"
      },
      "publishTime": {
        "type": "date"
      },
      "layout": {
        "type": "integer"
      },
      "images": {
        "type": "keyword",
        "index": false
      },
      "staticUrl": {
        "type": "keyword",
        "index": false
      },
      "authorId": {
        "type": "long"
      },
      "authorName": {
        "type": "text"
      },
      "title": {
        "type": "text",
        "analyzer": "ik_smart"
      },
      "content": {
        "type": "text",
        "analyzer": "ik_smart"
      }
    }
  }
}
```

GET request to view the mapping: http://192.168.200.130:9200/app_info_article

DELETE request to drop the index and its mapping: http://192.168.200.130:9200/app_info_article

GET request to query all documents: http://192.168.200.130:9200/app_info_article/_search

7.3 Initializing Data into the Index

Import the es-init module into the heima-leadnews-test project.

Query all article data and bulk-import it into the ES index:

```java
package com.heima.es;

import com.alibaba.fastjson.JSON;
import com.heima.es.mapper.ApArticleMapper;
import com.heima.es.pojo.SearchArticleVo;
import org.elasticsearch.action.bulk.BulkRequest;
import org.elasticsearch.action.index.IndexRequest;
import org.elasticsearch.client.RequestOptions;
import org.elasticsearch.client.RestHighLevelClient;
import org.elasticsearch.common.xcontent.XContentType;
import org.junit.Test;
import org.junit.runner.RunWith;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.test.context.SpringBootTest;
import org.springframework.test.context.junit4.SpringRunner;

import java.util.List;

@SpringBootTest
@RunWith(SpringRunner.class)
public class ApArticleTest {

    @Autowired
    private ApArticleMapper apArticleMapper;

    @Autowired
    private RestHighLevelClient restHighLevelClient;

    /**
     * Note: if the data set is large, import it page by page.
     */
    @Test
    public void init() throws Exception {
        // 1. Query all qualifying article data
        List<SearchArticleVo> searchArticleVos = apArticleMapper.loadArticleList();
        // 2. Bulk-import into the ES index
        BulkRequest bulkRequest = new BulkRequest("app_info_article");
        for (SearchArticleVo searchArticleVo : searchArticleVos) {
            IndexRequest indexRequest = new IndexRequest().id(searchArticleVo.getId().toString())
                    .source(JSON.toJSONString(searchArticleVo), XContentType.JSON);
            // add each document to the bulk request
            bulkRequest.add(indexRequest);
        }
        restHighLevelClient.bulk(bulkRequest, RequestOptions.DEFAULT);
    }
}
```

7.4 Implementing App-Side Article Search

Interface definition

① Import the heima-leadnews-search module

② Add the dependencies to the pom of heima-leadnews-service

```xml
<!--elasticsearch-->
<dependency>
    <groupId>org.elasticsearch.client</groupId>
    <artifactId>elasticsearch-rest-high-level-client</artifactId>
    <version>7.4.0</version>
</dependency>
<dependency>
    <groupId>org.elasticsearch.client</groupId>
    <artifactId>elasticsearch-rest-client</artifactId>
    <version>7.4.0</version>
</dependency>
<dependency>
    <groupId>org.elasticsearch</groupId>
    <artifactId>elasticsearch</artifactId>
    <version>7.4.0</version>
</dependency>
```

③ Nacos configuration for leadnews-search

```yaml
spring:
  autoconfigure:
    exclude: org.springframework.boot.autoconfigure.jdbc.DataSourceAutoConfiguration
elasticsearch:
  host: 192.168.200.130
  port: 9200
```

④ Define the search endpoint

```java
package com.heima.search.controller.v1;

import com.heima.model.common.dtos.ResponseResult;
import com.heima.model.search.dtos.UserSearchDto;
import com.heima.search.service.ArticleSearchService;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.web.bind.annotation.PostMapping;
import org.springframework.web.bind.annotation.RequestBody;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RestController;

import java.io.IOException;

@RestController
@RequestMapping("/api/v1/article/search")
public class ArticleSearchController {

    @Autowired
    private ArticleSearchService articleSearchService;

    @PostMapping("/search")
    public ResponseResult search(@RequestBody UserSearchDto dto) throws IOException {
        return articleSearchService.search(dto);
    }
}
```

UserSearchDto

```java
package com.heima.model.search.dtos;

import lombok.Data;

import java.util.Date;

@Data
public class UserSearchDto {

    /**
     * Search keywords
     */
    String searchWords;

    /**
     * Current page
     */
    int pageNum;

    /**
     * Page size
     */
    int pageSize;

    /**
     * Minimum (oldest) publish time
     */
    Date minBehotTime;

    public int getFromIndex() {
        if (this.pageNum < 1) return 0;
        if (this.pageSize < 1) this.pageSize = 10;
        return this.pageSize * (pageNum - 1);
    }
}
```
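To make the paging arithmetic concrete, here is a standalone copy of the `getFromIndex` rule (for illustration only; the DTO above is the real implementation):

```java
public class FromIndexDemo {
    // Same rule as UserSearchDto.getFromIndex: clamp bad input,
    // then offset = pageSize * (pageNum - 1)
    static int fromIndex(int pageNum, int pageSize) {
        if (pageNum < 1) return 0;
        if (pageSize < 1) pageSize = 10;
        return pageSize * (pageNum - 1);
    }

    public static void main(String[] args) {
        // page 3 with 10 items per page starts at document 20
        System.out.println(fromIndex(3, 10));
    }
}
```

So page 1 always starts at offset 0, and invalid page sizes fall back to 10.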

⑤ Create the service-layer interface ArticleSearchService

```java
package com.heima.search.service;

import com.heima.model.common.dtos.ResponseResult;
import com.heima.model.search.dtos.UserSearchDto;

import java.io.IOException;

public interface ArticleSearchService {

    /**
     * Paged ES article search
     */
    ResponseResult search(UserSearchDto userSearchDto) throws IOException;
}
```

Implementation

```java
package com.heima.search.service.impl;

import com.alibaba.fastjson.JSON;
import com.heima.model.common.dtos.ResponseResult;
import com.heima.model.common.enums.AppHttpCodeEnum;
import com.heima.model.search.dtos.UserSearchDto;
import com.heima.search.service.ArticleSearchService;
import lombok.extern.slf4j.Slf4j;
import org.apache.commons.lang3.StringUtils;
import org.elasticsearch.action.search.SearchRequest;
import org.elasticsearch.action.search.SearchResponse;
import org.elasticsearch.client.RequestOptions;
import org.elasticsearch.client.RestHighLevelClient;
import org.elasticsearch.common.text.Text;
import org.elasticsearch.index.query.*;
import org.elasticsearch.search.SearchHit;
import org.elasticsearch.search.builder.SearchSourceBuilder;
import org.elasticsearch.search.fetch.subphase.highlight.HighlightBuilder;
import org.elasticsearch.search.sort.SortOrder;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Service;

import java.io.IOException;
import java.util.ArrayList;
import java.util.List;
import java.util.Map;

@Service
@Slf4j
public class ArticleSearchServiceImpl implements ArticleSearchService {

    @Autowired
    private RestHighLevelClient restHighLevelClient;

    /**
     * Paged ES article search
     */
    @Override
    public ResponseResult search(UserSearchDto dto) throws IOException {
        // 1. Validate parameters
        if (dto == null || StringUtils.isBlank(dto.getSearchWords())) {
            return ResponseResult.errorResult(AppHttpCodeEnum.PARAM_INVALID);
        }
        // 2. Build the query
        SearchRequest searchRequest = new SearchRequest("app_info_article");
        SearchSourceBuilder searchSourceBuilder = new SearchSourceBuilder();
        // bool query
        BoolQueryBuilder boolQueryBuilder = QueryBuilders.boolQuery();
        // analyze the keywords, then match against title and content
        QueryStringQueryBuilder queryStringQueryBuilder = QueryBuilders.queryStringQuery(dto.getSearchWords())
                .field("title").field("content").defaultOperator(Operator.OR);
        boolQueryBuilder.must(queryStringQueryBuilder);
        // only data published before minBehotTime
        RangeQueryBuilder rangeQueryBuilder = QueryBuilders.rangeQuery("publishTime").lt(dto.getMinBehotTime().getTime());
        boolQueryBuilder.filter(rangeQueryBuilder);
        // paging
        searchSourceBuilder.from(0);
        searchSourceBuilder.size(dto.getPageSize());
        // sort by publish time, newest first
        searchSourceBuilder.sort("publishTime", SortOrder.DESC);
        // highlight the title
        HighlightBuilder highlightBuilder = new HighlightBuilder();
        highlightBuilder.field("title");
        highlightBuilder.preTags("<font style='color: red; font-size: inherit;'>");
        highlightBuilder.postTags("</font>");
        searchSourceBuilder.highlighter(highlightBuilder);
        searchSourceBuilder.query(boolQueryBuilder);
        searchRequest.source(searchSourceBuilder);
        SearchResponse searchResponse = restHighLevelClient.search(searchRequest, RequestOptions.DEFAULT);
        // 3. Wrap and return the results
        List<Map> list = new ArrayList<>();
        SearchHit[] hits = searchResponse.getHits().getHits();
        for (SearchHit hit : hits) {
            String json = hit.getSourceAsString();
            Map map = JSON.parseObject(json, Map.class);
            // handle highlighting
            if (hit.getHighlightFields() != null && hit.getHighlightFields().size() > 0) {
                Text[] titles = hit.getHighlightFields().get("title").getFragments();
                String title = StringUtils.join(titles);
                // highlighted title
                map.put("h_title", title);
            } else {
                // original title
                map.put("h_title", map.get("title"));
            }
            list.add(map);
        }
        return ResponseResult.okResult(list);
    }
}
```

⑥ Testing

Add a route for the search microservice to the app gateway configuration.

```yaml
# search microservice
- id: leadnews-search
  uri: lb://leadnews-search
  predicates:
    - Path=/search/**
  filters:
    - StripPrefix=1
```
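With StripPrefix=1 the gateway removes the first path segment before forwarding the request to the service. A minimal sketch of the idea (an illustration, not Spring Cloud Gateway's actual implementation):

```java
public class StripPrefixDemo {
    // Remove the first `parts` path segments, mimicking the StripPrefix filter
    static String stripPrefix(String path, int parts) {
        String p = path.startsWith("/") ? path.substring(1) : path;
        String[] segments = p.split("/");
        StringBuilder sb = new StringBuilder();
        for (int i = parts; i < segments.length; i++) {
            sb.append('/').append(segments[i]);
        }
        return sb.length() == 0 ? "/" : sb.toString();
    }

    public static void main(String[] args) {
        // /search/api/v1/article/search -> /api/v1/article/search
        System.out.println(stripPrefix("/search/api/v1/article/search", 1));
    }
}
```

So a client calls /search/api/v1/article/search on the gateway, and the search service receives /api/v1/article/search, which matches the controller mapping defined above.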

7.5 Creating the Index Entry When an Article Is Published

Approach

Part 1: The article microservice sends a message

① Move SearchArticleVo into the model project

```java
package com.heima.model.search.vos;

import lombok.Data;

import java.util.Date;

@Data
public class SearchArticleVo {

    // article id
    private Long id;
    // article title
    private String title;
    // publish time
    private Date publishTime;
    // layout
    private Integer layout;
    // cover images
    private String images;
    // author id
    private Long authorId;
    // author name
    private String authorName;
    // static url
    private String staticUrl;
    // article content
    private String content;
}
```

② In the article microservice, collect the data and send the message from the buildArticleToMinIO method of ArticleFreemarkerService

```java
package com.heima.article.service.impl;

import com.alibaba.fastjson.JSON;
import com.alibaba.fastjson.JSONArray;
import com.baomidou.mybatisplus.core.toolkit.Wrappers;
import com.heima.article.mapper.ApArticleContentMapper;
import com.heima.article.service.ApArticleService;
import com.heima.article.service.ArticleFreemarkerService;
import com.heima.common.constants.ArticleConstants;
import com.heima.file.service.FileStorageService;
import com.heima.model.article.pojos.ApArticle;
import com.heima.model.search.vos.SearchArticleVo;
import freemarker.template.Configuration;
import freemarker.template.Template;
import lombok.extern.slf4j.Slf4j;
import org.apache.commons.lang3.StringUtils;
import org.springframework.beans.BeanUtils;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.scheduling.annotation.Async;
import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Transactional;

import java.io.ByteArrayInputStream;
import java.io.InputStream;
import java.io.StringWriter;
import java.util.HashMap;
import java.util.Map;

@Service
@Slf4j
@Transactional
public class ArticleFreemarkerServiceImpl implements ArticleFreemarkerService {

    @Autowired
    private ApArticleContentMapper apArticleContentMapper;

    @Autowired
    private Configuration configuration;

    @Autowired
    private FileStorageService fileStorageService;

    @Autowired
    private ApArticleService apArticleService;

    /**
     * Generate the static file and upload it to MinIO
     */
    @Async
    @Override
    public void buildArticleToMinIO(ApArticle apArticle, String content) {
        // the article id is already known
        // 4.1 get the article content
        if (StringUtils.isNotBlank(content)) {
            // 4.2 render the content into an html file with freemarker
            Template template = null;
            StringWriter out = new StringWriter();
            try {
                template = configuration.getTemplate("article.ftl");
                // data model
                Map<String, Object> contentDataModel = new HashMap<>();
                contentDataModel.put("content", JSONArray.parseArray(content));
                // render
                template.process(contentDataModel, out);
            } catch (Exception e) {
                e.printStackTrace();
            }
            // 4.3 upload the html file to minio
            InputStream in = new ByteArrayInputStream(out.toString().getBytes());
            String path = fileStorageService.uploadHtmlFile("", apArticle.getId() + ".html", in);
            // 4.4 update the ap_article table, saving the static_url field
            apArticleService.update(Wrappers.<ApArticle>lambdaUpdate().eq(ApArticle::getId, apArticle.getId())
                    .set(ApArticle::getStaticUrl, path));
            // send a message to create the index entry
            createArticleESIndex(apArticle, content, path);
        }
    }

    @Autowired
    private KafkaTemplate<String, String> kafkaTemplate;

    /**
     * Send a message to create the index entry
     */
    private void createArticleESIndex(ApArticle apArticle, String content, String path) {
        SearchArticleVo vo = new SearchArticleVo();
        BeanUtils.copyProperties(apArticle, vo);
        vo.setContent(content);
        vo.setStaticUrl(path);
        kafkaTemplate.send(ArticleConstants.ARTICLE_ES_SYNC_TOPIC, JSON.toJSONString(vo));
    }
}
```

Add a new constant to the ArticleConstants class

```java
package com.heima.common.constants;

public class ArticleConstants {
    public static final Short LOADTYPE_LOAD_MORE = 1;
    public static final Short LOADTYPE_LOAD_NEW = 2;
    public static final String DEFAULT_TAG = "__all__";

    public static final String ARTICLE_ES_SYNC_TOPIC = "article.es.sync.topic";

    public static final Integer HOT_ARTICLE_LIKE_WEIGHT = 3;
    public static final Integer HOT_ARTICLE_COMMENT_WEIGHT = 5;
    public static final Integer HOT_ARTICLE_COLLECTION_WEIGHT = 8;

    public static final String HOT_ARTICLE_FIRST_PAGE = "hot_article_first_page_";
}
```

③ Configure Kafka in the article microservice to send messages

```yaml
kafka:
  bootstrap-servers: 192.168.200.130:9092
  producer:
    retries: 10
    key-serializer: org.apache.kafka.common.serialization.StringSerializer
    value-serializer: org.apache.kafka.common.serialization.StringSerializer
```

Part 2: The search microservice consumes the message and creates the index entry

① Add the Kafka configuration to the search microservice's Nacos config:

```yaml
spring:
  kafka:
    bootstrap-servers: 192.168.200.130:9092
    consumer:
      group-id: ${spring.application.name}
      key-deserializer: org.apache.kafka.common.serialization.StringDeserializer
      value-deserializer: org.apache.kafka.common.serialization.StringDeserializer
```

② Define a listener that receives the message and saves the index data

```java
package com.heima.search.listener;

import com.alibaba.fastjson.JSON;
import com.heima.common.constants.ArticleConstants;
import com.heima.model.search.vos.SearchArticleVo;
import lombok.extern.slf4j.Slf4j;
import org.apache.commons.lang3.StringUtils;
import org.elasticsearch.action.index.IndexRequest;
import org.elasticsearch.client.RequestOptions;
import org.elasticsearch.client.RestHighLevelClient;
import org.elasticsearch.common.xcontent.XContentType;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.kafka.annotation.KafkaListener;
import org.springframework.stereotype.Component;

import java.io.IOException;

@Component
@Slf4j
public class SyncArticleListener {

    @Autowired
    private RestHighLevelClient restHighLevelClient;

    @KafkaListener(topics = ArticleConstants.ARTICLE_ES_SYNC_TOPIC)
    public void onMessage(String message) {
        if (StringUtils.isNotBlank(message)) {
            log.info("SyncArticleListener, message={}", message);
            SearchArticleVo searchArticleVo = JSON.parseObject(message, SearchArticleVo.class);
            IndexRequest indexRequest = new IndexRequest("app_info_article");
            indexRequest.id(searchArticleVo.getId().toString());
            indexRequest.source(message, XContentType.JSON);
            try {
                restHighLevelClient.index(indexRequest, RequestOptions.DEFAULT);
            } catch (IOException e) {
                log.error("sync article to es error", e);
            }
        }
    }
}
```

7.6 App-Side Search: Search History

 

  • Show the user's last 10 search records, ordered by search time descending

  • Search records can be deleted

  • Keep at most 10 history records; when there are more, delete the oldest

Each user's search history must be stored separately, the data volume is large, and it must load quickly. Data like this is usually better stored in MongoDB; storing it directly in a relational database is not recommended.
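The "keep the 10 most recent" rule can be sketched in memory with a deque (an illustration only; the real service enforces this against MongoDB, as shown later in this section):

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Minimal sketch of the history-capping rule: newest first, capped at 10,
// and re-searching an existing keyword refreshes its position.
public class SearchHistory {
    private static final int MAX = 10;
    private final Deque<String> records = new ArrayDeque<>();

    public void insert(String keyword) {
        records.remove(keyword);   // re-searching a keyword refreshes it
        records.addFirst(keyword); // newest first
        if (records.size() > MAX) {
            records.removeLast();  // evict the oldest entry
        }
    }

    public Deque<String> list() {
        return records;
    }
}
```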

Installing MongoDB

Pull the image

```shell
docker pull mongo
```

Create the container

```shell
docker run -di --name mongo-service --restart=always -p 27017:27017 -v ~/data/mongodata:/data mongo
```

Import the mongo-demo project from the course materials into heima-leadnews-test.

First: the MongoDB dependency

```xml
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-data-mongodb</artifactId>
</dependency>
```

Second: MongoDB configuration

```yaml
server:
  port: 9998
spring:
  data:
    mongodb:
      host: 192.168.200.130
      port: 27017
      database: leadnews-history
```

Third: the mapped entity

```java
package com.itheima.mongo.pojo;

import lombok.Data;
import org.springframework.data.mongodb.core.mapping.Document;

import java.io.Serializable;
import java.util.Date;

/**
 * <p>
 * Associate (suggestion) words table
 * </p>
 *
 * @author itheima
 */
@Data
@Document("ap_associate_words")
public class ApAssociateWords implements Serializable {

    private static final long serialVersionUID = 1L;

    private String id;

    /**
     * Associate word
     */
    private String associateWords;

    /**
     * Created time
     */
    private Date createdTime;
}
```

Core operations

```java
package com.itheima.mongo.test;

import com.itheima.mongo.MongoApplication;
import com.itheima.mongo.pojo.ApAssociateWords;
import org.junit.Test;
import org.junit.runner.RunWith;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.test.context.SpringBootTest;
import org.springframework.data.domain.Sort;
import org.springframework.data.mongodb.core.MongoTemplate;
import org.springframework.data.mongodb.core.query.Criteria;
import org.springframework.data.mongodb.core.query.Query;
import org.springframework.test.context.junit4.SpringRunner;

import java.util.Date;
import java.util.List;

@SpringBootTest(classes = MongoApplication.class)
@RunWith(SpringRunner.class)
public class MongoTest {

    @Autowired
    private MongoTemplate mongoTemplate;

    // save
    @Test
    public void saveTest() {
        /*for (int i = 0; i < 10; i++) {
            ApAssociateWords apAssociateWords = new ApAssociateWords();
            apAssociateWords.setAssociateWords("黑马头条");
            apAssociateWords.setCreatedTime(new Date());
            mongoTemplate.save(apAssociateWords);
        }*/
        ApAssociateWords apAssociateWords = new ApAssociateWords();
        apAssociateWords.setAssociateWords("黑马直播");
        apAssociateWords.setCreatedTime(new Date());
        mongoTemplate.save(apAssociateWords);
    }

    // find one by id
    @Test
    public void saveFindOne() {
        ApAssociateWords apAssociateWords = mongoTemplate.findById("60bd973eb0c1d430a71a7928", ApAssociateWords.class);
        System.out.println(apAssociateWords);
    }

    // conditional query
    @Test
    public void testQuery() {
        Query query = Query.query(Criteria.where("associateWords").is("黑马头条"))
                .with(Sort.by(Sort.Direction.DESC, "createdTime"));
        List<ApAssociateWords> apAssociateWordsList = mongoTemplate.find(query, ApAssociateWords.class);
        System.out.println(apAssociateWordsList);
    }

    // delete
    @Test
    public void testDel() {
        mongoTemplate.remove(Query.query(Criteria.where("associateWords").is("黑马头条")), ApAssociateWords.class);
    }
}
```

7.7 Saving Search Records

 

Part 1: Integrate MongoDB into the search microservice

① pom dependency

```xml
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-data-mongodb</artifactId>
</dependency>
```

② Nacos configuration

```yaml
spring:
  data:
    mongodb:
      host: 192.168.200.130
      port: 27017
      database: leadnews-history
```

③ Entity class for the collection that stores user search records:

```java
package com.heima.search.pojos;

import lombok.Data;
import org.springframework.data.mongodb.core.mapping.Document;

import java.io.Serializable;
import java.util.Date;

/**
 * <p>
 * APP user search table
 * </p>
 * @author itheima
 */
@Data
@Document("ap_user_search")
public class ApUserSearch implements Serializable {

    private static final long serialVersionUID = 1L;

    /**
     * Primary key
     */
    private String id;

    /**
     * User id
     */
    private Integer userId;

    /**
     * Search keyword
     */
    private String keyword;

    /**
     * Created time
     */
    private Date createdTime;
}
```

Part 2: Create ApUserSearchService with a new insert method

```java
public interface ApUserSearchService {

    /**
     * Save a user's search history record
     */
    public void insert(String keyword, Integer userId);
}
```

Implementation

```java
@Service
@Slf4j
public class ApUserSearchServiceImpl implements ApUserSearchService {

    @Autowired
    private MongoTemplate mongoTemplate;

    /**
     * Save a user's search history record
     */
    @Override
    @Async
    public void insert(String keyword, Integer userId) {
        // 1. Look for this keyword in the current user's search history
        Query query = Query.query(Criteria.where("userId").is(userId).and("keyword").is(keyword));
        ApUserSearch apUserSearch = mongoTemplate.findOne(query, ApUserSearch.class);
        // 2. If it exists, just refresh its created time
        if (apUserSearch != null) {
            apUserSearch.setCreatedTime(new Date());
            mongoTemplate.save(apUserSearch);
            return;
        }
        // 3. Otherwise, check whether the user already has 10 history records
        apUserSearch = new ApUserSearch();
        apUserSearch.setUserId(userId);
        apUserSearch.setKeyword(keyword);
        apUserSearch.setCreatedTime(new Date());

        Query query1 = Query.query(Criteria.where("userId").is(userId));
        query1.with(Sort.by(Sort.Direction.DESC, "createdTime"));
        List<ApUserSearch> apUserSearchList = mongoTemplate.find(query1, ApUserSearch.class);

        if (apUserSearchList == null || apUserSearchList.size() < 10) {
            mongoTemplate.save(apUserSearch);
        } else {
            // replace the oldest record with the new one
            ApUserSearch lastUserSearch = apUserSearchList.get(apUserSearchList.size() - 1);
            mongoTemplate.findAndReplace(Query.query(Criteria.where("id").is(lastUserSearch.getId())), apUserSearch);
        }
    }
}
```

Part 3: Following the pattern of the self-media (wemedia) microservices, resolve the currently logged-in user in the search microservice

Part 4: Call the history-saving logic from the search method of ArticleSearchService

```java
package com.heima.search.service.impl;

import com.alibaba.fastjson.JSON;
import com.heima.model.common.dtos.ResponseResult;
import com.heima.model.common.enums.AppHttpCodeEnum;
import com.heima.model.search.dtos.UserSearchDto;
import com.heima.model.user.pojos.ApUser;
import com.heima.search.service.ApUserSearchService;
import com.heima.search.service.ArticleSearchService;
import com.heima.utils.thread.AppThreadLocalUtil;
import lombok.extern.slf4j.Slf4j;
import org.apache.commons.lang3.StringUtils;
import org.elasticsearch.action.search.SearchRequest;
import org.elasticsearch.action.search.SearchResponse;
import org.elasticsearch.client.RequestOptions;
import org.elasticsearch.client.RestHighLevelClient;
import org.elasticsearch.common.text.Text;
import org.elasticsearch.index.query.*;
import org.elasticsearch.search.SearchHit;
import org.elasticsearch.search.builder.SearchSourceBuilder;
import org.elasticsearch.search.fetch.subphase.highlight.HighlightBuilder;
import org.elasticsearch.search.sort.SortOrder;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Service;

import java.io.IOException;
import java.util.ArrayList;
import java.util.List;
import java.util.Map;

@Service
@Slf4j
public class ArticleSearchServiceImpl implements ArticleSearchService {

    @Autowired
    private RestHighLevelClient restHighLevelClient;

    @Autowired
    private ApUserSearchService apUserSearchService;

    /**
     * Paged ES article search
     */
    @Override
    public ResponseResult search(UserSearchDto dto) throws IOException {
        // 1. Validate parameters
        if (dto == null || StringUtils.isBlank(dto.getSearchWords())) {
            return ResponseResult.errorResult(AppHttpCodeEnum.PARAM_INVALID);
        }

        ApUser user = AppThreadLocalUtil.getUser();
        // asynchronously save the search record, but only on the first page
        if (user != null && dto.getFromIndex() == 0) {
            apUserSearchService.insert(dto.getSearchWords(), user.getId());
        }

        // 2. Build the query
        SearchRequest searchRequest = new SearchRequest("app_info_article");
        SearchSourceBuilder searchSourceBuilder = new SearchSourceBuilder();
        // bool query
        BoolQueryBuilder boolQueryBuilder = QueryBuilders.boolQuery();
        // analyze the keywords, then match against title and content
        QueryStringQueryBuilder queryStringQueryBuilder = QueryBuilders.queryStringQuery(dto.getSearchWords())
                .field("title").field("content").defaultOperator(Operator.OR);
        boolQueryBuilder.must(queryStringQueryBuilder);
        // only data published before minBehotTime
        RangeQueryBuilder rangeQueryBuilder = QueryBuilders.rangeQuery("publishTime").lt(dto.getMinBehotTime().getTime());
        boolQueryBuilder.filter(rangeQueryBuilder);
        // paging
        searchSourceBuilder.from(0);
        searchSourceBuilder.size(dto.getPageSize());
        // sort by publish time, newest first
        searchSourceBuilder.sort("publishTime", SortOrder.DESC);
        // highlight the title
        HighlightBuilder highlightBuilder = new HighlightBuilder();
        highlightBuilder.field("title");
        highlightBuilder.preTags("<font style='color: red; font-size: inherit;'>");
        highlightBuilder.postTags("</font>");
        searchSourceBuilder.highlighter(highlightBuilder);
        searchSourceBuilder.query(boolQueryBuilder);
        searchRequest.source(searchSourceBuilder);
        SearchResponse searchResponse = restHighLevelClient.search(searchRequest, RequestOptions.DEFAULT);
        // 3. Wrap and return the results
        List<Map> list = new ArrayList<>();
        SearchHit[] hits = searchResponse.getHits().getHits();
        for (SearchHit hit : hits) {
            String json = hit.getSourceAsString();
            Map map = JSON.parseObject(json, Map.class);
            // handle highlighting
            if (hit.getHighlightFields() != null && hit.getHighlightFields().size() > 0) {
                Text[] titles = hit.getHighlightFields().get("title").getFragments();
                String title = StringUtils.join(titles);
                // highlighted title
                map.put("h_title", title);
            } else {
                // original title
                map.put("h_title", map.get("title"));
            }
            list.add(map);
        }
        return ResponseResult.okResult(list);
    }
}
```

Part 5: Make saving the history asynchronous by annotating the method with @Async

Part 6: Enable asynchronous execution on the search microservice's bootstrap class
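A minimal sketch of what that looks like (the class and package names here are assumptions; apply @EnableAsync to your actual bootstrap class):

```java
package com.heima.search;

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.scheduling.annotation.EnableAsync;

@SpringBootApplication
@EnableAsync  // without this, @Async methods run synchronously on the caller thread
public class SearchApplication {
    public static void main(String[] args) {
        SpringApplication.run(SearchApplication.class, args);
    }
}
```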

7.8 Loading the Search History List

① Controller

```java
/**
 * <p>
 * Front-end controller for the APP user search table
 * </p>
 * @author itheima
 */
@Slf4j
@RestController
@RequestMapping("/api/v1/history")
public class ApUserSearchController {

    @Autowired
    private ApUserSearchService apUserSearchService;

    @PostMapping("/load")
    public ResponseResult findUserSearch() {
        return apUserSearchService.findUserSearch();
    }
}
```

② Service

Add a new method to ApUserSearchService

```java
/**
 * Query search history
 */
ResponseResult findUserSearch();
```

Implementation

```java
/**
 * Query search history
 */
@Override
public ResponseResult findUserSearch() {
    // get the current user
    ApUser user = AppThreadLocalUtil.getUser();
    if (user == null) {
        return ResponseResult.errorResult(AppHttpCodeEnum.NEED_LOGIN);
    }
    // query this user's records, newest first
    List<ApUserSearch> apUserSearches = mongoTemplate.find(
            Query.query(Criteria.where("userId").is(user.getId()))
                    .with(Sort.by(Sort.Direction.DESC, "createdTime")),
            ApUserSearch.class);
    return ResponseResult.okResult(apUserSearches);
}
```

7.9 Deleting Search Records

① Add a new method to the ApUserSearchController

```java
@PostMapping("/del")
public ResponseResult delUserSearch(@RequestBody HistorySearchDto historySearchDto) {
    return apUserSearchService.delUserSearch(historySearchDto);
}
```

HistorySearchDto

```java
@Data
public class HistorySearchDto {
    /**
     * Id of the search history record to delete
     */
    String id;
}
```

② Add a new method to ApUserSearchService

```java
/**
 * Delete search history
 */
ResponseResult delUserSearch(HistorySearchDto historySearchDto);
```

Implementation

```java
/**
 * Delete a history record
 */
@Override
public ResponseResult delUserSearch(HistorySearchDto dto) {
    // 1. Validate parameters
    if (dto.getId() == null) {
        return ResponseResult.errorResult(AppHttpCodeEnum.PARAM_INVALID);
    }
    // 2. Check that the user is logged in
    ApUser user = AppThreadLocalUtil.getUser();
    if (user == null) {
        return ResponseResult.errorResult(AppHttpCodeEnum.NEED_LOGIN);
    }
    // 3. Delete the record
    mongoTemplate.remove(Query.query(Criteria.where("userId").is(user.getId()).and("id").is(dto.getId())), ApUserSearch.class);
    return ResponseResult.okResult(AppHttpCodeEnum.SUCCESS);
}
```

7.10 Keyword Suggestions (Associate Words)

Requirements

  • Show suggested words based on the keyword the user is typing

Search terms: data sources

Entity class for the suggestion (associate) words

```java
package com.heima.search.pojos;

import lombok.Data;
import org.springframework.data.mongodb.core.mapping.Document;

import java.io.Serializable;
import java.util.Date;

/**
 * <p>
 * Associate (suggestion) words table
 * </p>
 *
 * @author itheima
 */
@Data
@Document("ap_associate_words")
public class ApAssociateWords implements Serializable {

    private static final long serialVersionUID = 1L;

    private String id;

    /**
     * Associate word
     */
    private String associateWords;

    /**
     * Created time
     */
    private Date createdTime;
}
```

These are usually terms that are searched frequently; in practice they come from two sources:

First: maintaining your own search terms

Analyze the terms users search most frequently and rank them to build the suggestion list
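The "rank by frequency" idea can be sketched as follows (the input data and cutoff are assumptions for illustration; a real pipeline would aggregate search logs):

```java
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

// Sketch: derive candidate associate words by ranking searched keywords by frequency.
public class TopSearchWords {
    static List<String> topN(List<String> searches, int n) {
        // count how often each keyword was searched
        Map<String, Long> freq = searches.stream()
                .collect(Collectors.groupingBy(s -> s, Collectors.counting()));
        // sort by count descending and keep the top n keywords
        return freq.entrySet().stream()
                .sorted(Map.Entry.<String, Long>comparingByValue().reversed())
                .limit(n)
                .map(Map.Entry::getKey)
                .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        System.out.println(topN(List.of("a", "b", "a", "c", "a", "b"), 2));
    }
}
```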

Second: third-party sources

Baidu Keyword Planner, 5118, aizhan.com

Implementation

① Create the suggestion-word controller ApAssociateWordsController

```java
package com.heima.search.controller.v1;

import com.heima.model.common.dtos.ResponseResult;
import com.heima.model.search.dtos.UserSearchDto;
import com.heima.search.service.ApAssociateWordsService;
import lombok.extern.slf4j.Slf4j;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.web.bind.annotation.PostMapping;
import org.springframework.web.bind.annotation.RequestBody;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RestController;

/**
 * <p>
 * Front-end controller for the associate words table
 * </p>
 * @author itheima
 */
@Slf4j
@RestController
@RequestMapping("/api/v1/associate")
public class ApAssociateWordsController {

    @Autowired
    private ApAssociateWordsService apAssociateWordsService;

    @PostMapping("/search")
    public ResponseResult findAssociate(@RequestBody UserSearchDto userSearchDto) {
        return apAssociateWordsService.findAssociate(userSearchDto);
    }
}
```

② Create the service-layer interface ApAssociateWordsService

```java
package com.heima.search.service;

import com.heima.model.common.dtos.ResponseResult;
import com.heima.model.search.dtos.UserSearchDto;

/**
 * <p>
 * Associate words service
 * </p>
 *
 * @author itheima
 */
public interface ApAssociateWordsService {

    /**
     * Find associate words
     */
    ResponseResult findAssociate(UserSearchDto userSearchDto);
}
```

Implementation

```java
package com.heima.search.service.impl;

import com.heima.model.common.dtos.ResponseResult;
import com.heima.model.common.enums.AppHttpCodeEnum;
import com.heima.model.search.dtos.UserSearchDto;
import com.heima.search.pojos.ApAssociateWords;
import com.heima.search.service.ApAssociateWordsService;
import org.apache.commons.lang3.StringUtils;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.data.mongodb.core.MongoTemplate;
import org.springframework.data.mongodb.core.query.Criteria;
import org.springframework.data.mongodb.core.query.Query;
import org.springframework.stereotype.Service;

import java.util.List;

/**
 * @Description:
 * @Version: V1.0
 */
@Service
public class ApAssociateWordsServiceImpl implements ApAssociateWordsService {

    @Autowired
    MongoTemplate mongoTemplate;

    /**
     * Find associate words
     */
    @Override
    public ResponseResult findAssociate(UserSearchDto userSearchDto) {
        // 1. Validate parameters
        if (userSearchDto == null || StringUtils.isBlank(userSearchDto.getSearchWords())) {
            return ResponseResult.errorResult(AppHttpCodeEnum.PARAM_INVALID);
        }
        // 2. Cap the page size
        if (userSearchDto.getPageSize() > 20) {
            userSearchDto.setPageSize(20);
        }
        // 3. Run a fuzzy (regex) query
        Query query = Query.query(Criteria.where("associateWords").regex(".*?\\" + userSearchDto.getSearchWords() + ".*"));
        query.limit(userSearchDto.getPageSize());
        List<ApAssociateWords> wordsList = mongoTemplate.find(query, ApAssociateWords.class);
        return ResponseResult.okResult(wordsList);
    }
}
```

8. Hot Article Computation: Scheduled Calculation

Requirements

The current implementation simply queries the database, ordered by publish time descending.

Problem 1:

If traffic is high, querying the database directly puts it under heavy load.

Problem 2:

Newly published articles appear first, but they are not necessarily hot articles.

Implementation approach

Store the hot data in Redis for display.

Whether an article is hot is judged by several metrics: like count, comment count, view count, and collection count.

There are two ways to compute article heat:

  • scheduled computation

  • real-time computation

For the scheduled approach:

  • Compute each article's score from its behavior data (likes, comments, views, collections) with a scheduled task that runs once a day

  • Store the highest-scoring articles in Redis

  • When app users request article lists, serve the high-heat articles from Redis first
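The score calculation implied above can be sketched by reusing the weights defined in ArticleConstants (like=3, comment=5, collection=8); the assumption that each view contributes 1 is for illustration:

```java
// Sketch of a hot-article score using the ArticleConstants weights.
public class HotArticleScore {
    static int computeScore(int views, int likes, int comments, int collections) {
        int score = views;          // assumption: each view contributes 1
        score += likes * 3;         // HOT_ARTICLE_LIKE_WEIGHT
        score += comments * 5;      // HOT_ARTICLE_COMMENT_WEIGHT
        score += collections * 8;   // HOT_ARTICLE_COLLECTION_WEIGHT
        return score;
    }

    public static void main(String[] args) {
        // 10 views + 2 likes + 1 comment + 1 collection = 10 + 6 + 5 + 8 = 29
        System.out.println(computeScore(10, 2, 1, 1));
    }
}
```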

8.1 Scheduled Task Framework: xxl-job

Spring's traditional @Scheduled can run timed tasks, but it has several problems:

  • In a cluster, the same task runs on every node; a distributed lock can prevent this, but it is cumbersome

  • Cron expressions are defined in the code and inconvenient to change

  • When a task fails there is no retry and no statistics

  • Large workloads cannot be sharded effectively

The solution to these problems:

xxl-job, a distributed task scheduling framework

8.2 Distributed Task Scheduling

Software architecture has been shifting toward distributed systems: a monolith is split into services that cooperate over the network. In a distributed architecture a service is usually deployed as multiple instances, and running task scheduling in such an environment is called distributed task scheduling.

Building the task scheduler itself as a distributed system gives it the characteristics of one and raises its scheduling capacity:

1. Parallel task scheduling

Parallel scheduling relies on multithreading, but with a large number of tasks, multithreading alone becomes a bottleneck because a single machine's CPU capacity is limited.

If the scheduler is deployed across machines (and each node can itself be a cluster), multiple machines share the scheduling work: tasks can be split into shards that different instances execute in parallel, improving throughput.
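The sharding idea can be sketched as follows: each instance receives a shard index and the shard total, and only processes the items assigned to it. The modulo rule is a common convention (it is also how xxl-job's sharding broadcast is typically used); the data here is an assumption for illustration:

```java
import java.util.List;
import java.util.stream.Collectors;

// Sketch: each instance keeps only the ids belonging to its shard.
public class ShardingDemo {
    static List<Long> myShare(List<Long> ids, int shardIndex, int shardTotal) {
        return ids.stream()
                .filter(id -> id % shardTotal == shardIndex)  // this instance's slice
                .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        // with 3 instances, instance 0 processes ids 3 and 6
        System.out.println(myShare(List.of(1L, 2L, 3L, 4L, 5L, 6L), 0, 3));
    }
}
```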

2. High availability

If one instance goes down, the other instances keep executing tasks.

3. Elastic scaling

Adding instances to the cluster increases task-processing capacity.

4. Task management and monitoring

All scheduled tasks in the system are managed and monitored centrally, so developers and operators always know each task's status and can respond quickly when something goes wrong.

Challenges of distributed task scheduling

When the scheduler is deployed as a cluster, the same task may execute more than once. For example, an e-commerce system that periodically issues coupons could issue them twice, costing the company money, and a credit-card repayment reminder could fire repeatedly and annoy users. So the same task must run on only one instance at a time. Common solutions:

  • Distributed lock: before executing, each instance tries to acquire a lock. If acquisition fails, another instance is already running the task; if it succeeds, no one else is running it and this instance may execute.

  • ZooKeeper election: elect a Leader. When the task fires, each instance checks whether it is the Leader; only the Leader runs the business logic. This also achieves the goal.
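The try-lock approach can be sketched as follows. The ConcurrentHashMap here stands in for a shared store such as Redis SETNX; in a real cluster the lock must live in shared infrastructure, not in process memory:

```java
import java.util.concurrent.ConcurrentHashMap;

// Sketch of "run once per cluster" with a try-lock.
public class OnceGuard {
    private static final ConcurrentHashMap<String, Boolean> LOCKS = new ConcurrentHashMap<>();

    static boolean tryRun(String taskKey, Runnable task) {
        // putIfAbsent returns null only for the first caller, which wins the lock
        if (LOCKS.putIfAbsent(taskKey, Boolean.TRUE) == null) {
            try {
                task.run();
                return true;
            } finally {
                LOCKS.remove(taskKey);  // release so the next scheduled run can acquire it
            }
        }
        return false;  // another instance is already running this task
    }
}
```

With a real distributed store the lock key would also carry an expiry, so a crashed holder cannot block the task forever.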
