· Supports Chinese word segmentation (N-shortest-path segmentation, CRF segmentation, index segmentation, user-defined dictionaries, part-of-speech tagging), named entity recognition (Chinese person names, transliterated names, Japanese person names, place names, organization names), keyword extraction, automatic summarization, phrase extraction, pinyin conversion, simplified/traditional Chinese conversion, text recommendation, and dependency parsing (MaxEnt and CRF dependency parsing)
Requirements
Java 1.8
Node.js >= 6
Docker
· Build the image
cd node-hanlp
./scripts/build-docker-image.sh
Or pull the image
docker pull samurais/hanlp-api:1.0.0
· Start the container
docker run -it --rm -p 3002:3000 samurais/hanlp-api:1.0.0
· Access the service
POST /tokenizer HTTP/1.1
Host: localhost:3002
Content-Type: application/json
{
  "type": "nlp",
  "content": "刘德华和张学友创作了很多流行歌曲"
}
RESPONSE
{
  "status": "success",
  "data": [
    {
      "word": "刘德华",
      "nature": "nr",
      "offset": 0
    },
    {
      "word": "和",
      "nature": "cc",
      "offset": 0
    },
    {
      "word": "张学友",
      "nature": "nr",
      "offset": 0
    },
    {
      "word": "创作",
      "nature": "v",
      "offset": 0
    },
    {
      "word": "了",
      "nature": "ule",
      "offset": 0
    },
    ...
  ]
}