The key to Elasticsearch is the inverted index, and the inverted index depends on segmenting document content into terms. That segmentation, in turn, requires an efficient and accurate algorithm; the IK analyzer is exactly such an algorithm for Chinese.
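For illustration only (the two documents and their terms below are made up): after segmentation, the inverted index maps each term back to the documents that contain it:
doc1: "程序员学习java"  ->  terms: 程序员, 学习, java
doc2: "java程序设计"    ->  terms: java, 程序, 设计
inverted index:
  程序员 -> [doc1]
  学习   -> [doc1]
  java   -> [doc1, doc2]
  程序   -> [doc2]
  设计   -> [doc2]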
Option 1: online installation
It takes a single command:
docker exec -it es ./bin/elasticsearch-plugin install https://github.com/medcl/elasticsearch-analysis-ik/releases/download/v7.12.1/elasticsearch-analysis-ik-7.12.1.zip
Then restart the es container:
docker restart es
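To confirm the plugin was installed, you can list the container's installed plugins (a quick check; this assumes the container is named es, as above):
docker exec -it es ./bin/elasticsearch-plugin list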
Option 2: offline installation
If your network is slow, you can also install offline.
First, inspect the plugins data volume of the Elasticsearch container installed earlier:
docker volume inspect es-plugins
The result looks like this:
[
    {
        "CreatedAt": "2024-11-06T10:06:34+08:00",
        "Driver": "local",
        "Labels": null,
        "Mountpoint": "/var/lib/docker/volumes/es-plugins/_data",
        "Name": "es-plugins",
        "Options": null,
        "Scope": "local"
    }
]
As you can see, the Elasticsearch plugins are mounted at the directory /var/lib/docker/volumes/es-plugins/_data. We need to upload the IK analyzer to this directory.
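As a convenience, docker can print just the mount point, using the inspect command's built-in --format option:
docker volume inspect es-plugins --format '{{ .Mountpoint }}'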
Locate the IK analyzer plugin in the course materials; they provide a zip of version 7.12.1 of the IK analyzer, which you need to unzip first.
Then upload it to the directory /var/lib/docker/volumes/es-plugins/_data on the virtual machine.
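For example, assuming the zip from the course materials is named elasticsearch-analysis-ik-7.12.1.zip (adjust to the actual filename), the unzip-and-copy steps on the virtual machine might look like this:
# unzip into a folder named ik, then copy it into the plugins volume
unzip elasticsearch-analysis-ik-7.12.1.zip -d ik
cp -r ik /var/lib/docker/volumes/es-plugins/_data/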
Finally, restart the es container:
docker restart es
The IK analyzer offers two modes:
- ik_smart: smart semantic segmentation (coarse-grained)
- ik_max_word: finest-grained segmentation

We can test the analyzers in Kibana's DevTools. First, the standard analyzer that ships with Elasticsearch:
POST /_analyze
{
"analyzer": "standard",
"text": "真知程序员学习java太棒了"
}
The result:
{ "tokens" : [ { "token" : "真", "start_offset" : 0, "end_offset" : 1, "type" : "<IDEOGRAPHIC>", "position" : 0 }, { "token" : "知", "start_offset" : 1, "end_offset" : 2, "type" : "<IDEOGRAPHIC>", "position" : 1 }, { "token" : "程", "start_offset" : 2, "end_offset" : 3, "type" : "<IDEOGRAPHIC>", "position" : 2 }, { "token" : "序", "start_offset" : 3, "end_offset" : 4, "type" : "<IDEOGRAPHIC>", "position" : 3 }, { "token" : "员", "start_offset" : 4, "end_offset" : 5, "type" : "<IDEOGRAPHIC>", "position" : 4 }, { "token" : "学", "start_offset" : 5, "end_offset" : 6, "type" : "<IDEOGRAPHIC>", "position" : 5 }, { "token" : "习", "start_offset" : 6, "end_offset" : 7, "type" : "<IDEOGRAPHIC>", "position" : 6 }, { "token" : "java", "start_offset" : 7, "end_offset" : 11, "type" : "<ALPHANUM>", "position" : 7 }, { "token" : "太", "start_offset" : 11, "end_offset" : 12, "type" : "<IDEOGRAPHIC>", "position" : 8 }, { "token" : "棒", "start_offset" : 12, "end_offset" : 13, "type" : "<IDEOGRAPHIC>", "position" : 9 }, { "token" : "了", "start_offset" : 13, "end_offset" : 14, "type" : "<IDEOGRAPHIC>", "position" : 10 } ] }
As you can see, the standard analyzer can only emit one token per character; it cannot segment Chinese correctly.
Now let's test the IK analyzer, starting with ik_smart:
POST /_analyze
{
"analyzer": "ik_smart",
"text": "真知程序员学习java太棒了"
}
The result:
{ "tokens" : [ { "token" : "真知", "start_offset" : 0, "end_offset" : 2, "type" : "CN_WORD", "position" : 0 }, { "token" : "程序员", "start_offset" : 2, "end_offset" : 5, "type" : "CN_WORD", "position" : 1 }, { "token" : "学习", "start_offset" : 5, "end_offset" : 7, "type" : "CN_WORD", "position" : 2 }, { "token" : "java", "start_offset" : 7, "end_offset" : 11, "type" : "ENGLISH", "position" : 3 }, { "token" : "太棒了", "start_offset" : 11, "end_offset" : 14, "type" : "CN_WORD", "position" : 4 } ] }
Next, the ik_max_word mode:
POST /_analyze
{
"analyzer": "ik_max_word",
"text": "真知程序员学习java太棒了"
}
The result:
{ "tokens" : [ { "token" : "真知", "start_offset" : 0, "end_offset" : 2, "type" : "CN_WORD", "position" : 0 }, { "token" : "程序员", "start_offset" : 2, "end_offset" : 5, "type" : "CN_WORD", "position" : 1 }, { "token" : "程序", "start_offset" : 2, "end_offset" : 4, "type" : "CN_WORD", "position" : 2 }, { "token" : "员", "start_offset" : 4, "end_offset" : 5, "type" : "CN_CHAR", "position" : 3 }, { "token" : "学习", "start_offset" : 5, "end_offset" : 7, "type" : "CN_WORD", "position" : 4 }, { "token" : "java", "start_offset" : 7, "end_offset" : 11, "type" : "ENGLISH", "position" : 5 }, { "token" : "太棒了", "start_offset" : 11, "end_offset" : 14, "type" : "CN_WORD", "position" : 6 }, { "token" : "太棒", "start_offset" : 11, "end_offset" : 13, "type" : "CN_WORD", "position" : 7 }, { "token" : "了", "start_offset" : 13, "end_offset" : 14, "type" : "CN_CHAR", "position" : 8 } ] }
As the internet has grown, coining new words has become ever more frequent. Many new terms have appeared that do not exist in the stock vocabulary, for example "泰裤辣" and "真知播客".
The IK analyzer cannot segment these words out of the box. Let's test:
POST /_analyze
{
"analyzer": "ik_max_word",
"text": "真知博客开设大学,真的泰裤辣!"
}
The result:
{ "tokens" : [ { "token" : "真", "start_offset" : 0, "end_offset" : 1, "type" : "CN_CHAR", "position" : 0 }, { "token" : "知", "start_offset" : 1, "end_offset" : 2, "type" : "CN_CHAR", "position" : 1 }, { "token" : "播", "start_offset" : 2, "end_offset" : 3, "type" : "CN_CHAR", "position" : 2 }, { "token" : "客", "start_offset" : 3, "end_offset" : 4, "type" : "CN_CHAR", "position" : 3 }, { "token" : "开设", "start_offset" : 4, "end_offset" : 6, "type" : "CN_WORD", "position" : 4 }, { "token" : "大学", "start_offset" : 6, "end_offset" : 8, "type" : "CN_WORD", "position" : 5 }, { "token" : "真的", "start_offset" : 9, "end_offset" : 11, "type" : "CN_WORD", "position" : 6 }, { "token" : "泰", "start_offset" : 11, "end_offset" : 12, "type" : "CN_CHAR", "position" : 7 }, { "token" : "裤", "start_offset" : 12, "end_offset" : 13, "type" : "CN_CHAR", "position" : 8 }, { "token" : "辣", "start_offset" : 13, "end_offset" : 14, "type" : "CN_CHAR", "position" : 9 } ] }
As you can see, neither 真知播客 nor 泰裤辣 is segmented correctly.
So to keep segmentation accurate, the IK analyzer's dictionary must be updated continually; for that, IK provides a mechanism for extending its vocabulary.
1) Open the IK analyzer's config directory.
Note: with the online installation method, there is no config directory by default; you need to upload the config directory from the ik package in the course materials to the corresponding location.
2) Add the following to the IKAnalyzer.cfg.xml configuration file:
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE properties SYSTEM "http://java.sun.com/dtd/properties.dtd">
<properties>
<comment>IK Analyzer extension configuration</comment>
<!-- Configure your own extension dictionary here *** add the extension dictionary -->
<entry key="ext_dict">ext.dic</entry>
</properties>
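The same file also supports an ext_stopwords entry. If you would rather register a stop-word file explicitly than rely on the bundled stopword.dic edited below (which IK loads by default), a sketch:
<!-- Configure your own extension stop-word dictionary here -->
<entry key="ext_stopwords">stopword.dic</entry>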
3) Create an ext.dic file in the IK analyzer's config directory (you can copy one of the existing files there as a starting point), with one word per line:
真知播客
泰裤辣
Oh...Yeah!!!
Then edit the stopword.dic file, which holds one stop word per line; for example, its entries include:
a an and are as at be but by for if in into is it no not of on or such that the their then there these they this to was will with 啦 啊 哦 呀 的 了
Finally, upload the modified files so they overwrite the previous ones.
4) Restart Elasticsearch:
docker restart es
# follow the logs
docker logs -f es
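Once Elasticsearch is back up, re-run the earlier request:
POST /_analyze
{
  "analyzer": "ik_max_word",
  "text": "真知播客开设大学,真的泰裤辣!"
}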
This time both 真知播客 and 泰裤辣 are segmented correctly:
{ "tokens" : [ { "token" : "真知播客", "start_offset" : 0, "end_offset" : 4, "type" : "CN_WORD", "position" : 0 }, { "token" : "开设", "start_offset" : 4, "end_offset" : 6, "type" : "CN_WORD", "position" : 1 }, { "token" : "大学", "start_offset" : 6, "end_offset" : 8, "type" : "CN_WORD", "position" : 2 }, { "token" : "真的", "start_offset" : 9, "end_offset" : 11, "type" : "CN_WORD", "position" : 3 }, { "token" : "泰裤辣", "start_offset" : 11, "end_offset" : 14, "type" : "CN_WORD", "position" : 4 } ] }
One more test:
POST /_analyze
{
"analyzer": "ik_max_word",
"text": "真知程序员来自于真知播客,网上课程学习java太棒了,真是泰裤辣,Oh...Yeah!!!"
}
The returned result looks good as well:
{ "tokens" : [ { "token" : "真知", "start_offset" : 0, "end_offset" : 2, "type" : "CN_WORD", "position" : 0 }, { "token" : "程序员", "start_offset" : 2, "end_offset" : 5, "type" : "CN_WORD", "position" : 1 }, { "token" : "程序", "start_offset" : 2, "end_offset" : 4, "type" : "CN_WORD", "position" : 2 }, { "token" : "员", "start_offset" : 4, "end_offset" : 5, "type" : "CN_CHAR", "position" : 3 }, { "token" : "来自于", "start_offset" : 5, "end_offset" : 8, "type" : "CN_WORD", "position" : 4 }, { "token" : "来自", "start_offset" : 5, "end_offset" : 7, "type" : "CN_WORD", "position" : 5 }, { "token" : "于", "start_offset" : 7, "end_offset" : 8, "type" : "CN_CHAR", "position" : 6 }, { "token" : "真知播客", "start_offset" : 8, "end_offset" : 12, "type" : "CN_WORD", "position" : 7 }, { "token" : "网上", "start_offset" : 13, "end_offset" : 15, "type" : "CN_WORD", "position" : 8 }, { "token" : "上课", "start_offset" : 14, "end_offset" : 16, "type" : "CN_WORD", "position" : 9 }, { "token" : "课程", "start_offset" : 15, "end_offset" : 17, "type" : "CN_WORD", "position" : 10 }, { "token" : "学习", "start_offset" : 17, "end_offset" : 19, "type" : "CN_WORD", "position" : 11 }, { "token" : "java", "start_offset" : 19, "end_offset" : 23, "type" : "ENGLISH", "position" : 12 }, { "token" : "太棒了", "start_offset" : 23, "end_offset" : 26, "type" : "CN_WORD", "position" : 13 }, { "token" : "太棒", "start_offset" : 23, "end_offset" : 25, "type" : "CN_WORD", "position" : 14 }, { "token" : "了", "start_offset" : 25, "end_offset" : 26, "type" : "CN_CHAR", "position" : 15 }, { "token" : "真是", "start_offset" : 27, "end_offset" : 29, "type" : "CN_WORD", "position" : 16 }, { "token" : "泰裤辣", "start_offset" : 29, "end_offset" : 32, "type" : "CN_WORD", "position" : 17 }, { "token" : "oh...yeah", "start_offset" : 33, "end_offset" : 42, "type" : "LETTER", "position" : 18 }, { "token" : "oh", "start_offset" : 33, "end_offset" : 35, "type" : "ENGLISH", "position" : 19 }, { "token" : "yeah", "start_offset" : 38, "end_offset" : 42, "type" : "ENGLISH", "position" : 20 } ] }
What does an analyzer do?
- It segments document text when the inverted index is created, and segments the user's input when a search is run.
How many modes does the IK analyzer have?
- ik_smart: smart segmentation, coarse-grained
- ik_max_word: finest segmentation, fine-grained
How do you extend the IK analyzer's vocabulary? How do you add stop words?
- Register an extension dictionary and a stop-word dictionary in the IKAnalyzer.cfg.xml file.