## Notes on my references for HS classification (thesis proposal material)

Thesis topic: Research on customs commodity classification based on a knowledge-enhanced semantic representation model

The references fall into three parts: automatic HS classification, neural-network-based text classification, and knowledge representation learning.

  1. Automatic HS classification (a short sketch after this sub-list shows how an HS code decomposes hierarchically)
    [1] 谢维,李银胜,邵永臻,吴晓彦.HS编码查询知识库设计与实现[J].计算机应用与软件, 2008(08):143-146.
    [2] Ding L, Fan Z, Chen D. Auto-categorization of HS code using background net approach[J]. Procedia Computer Science, 2015, 60: 1462-1471.
    [3] 张紫玄,王昊,朱立平,邓三鸿.中国海关HS编码风险的识别研究[J].数据分析与知识发现,2019,3(01):72-84.
    [4] 许重建,李险峰.基于深度学习的HS Code产品归类方法研究[J].现代计算机(专业版),2019(01):11-19.
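
A quick aside on what these papers classify into: the Harmonized System is hierarchical, with the first two digits giving the chapter, the first four the heading, and the first six the internationally harmonized subheading (countries append further digits for national lines). The sketch below only illustrates that structure; the example code and comments are my own, not taken from the cited papers.

```python
# A minimal sketch (illustrative values only) of how an HS code decomposes into
# its hierarchy levels: 2-digit chapter, 4-digit heading, 6-digit subheading,
# plus any national extension digits beyond the harmonized 6.
def split_hs_code(code: str) -> dict:
    digits = "".join(ch for ch in code if ch.isdigit())
    return {
        "chapter": digits[:2],            # e.g. 61 - knitted or crocheted apparel
        "heading": digits[:4],            # e.g. 6109 - T-shirts, singlets and vests
        "subheading": digits[:6],         # e.g. 610910 - of cotton
        "national_extension": digits[6:] or None,
    }

if __name__ == "__main__":
    print(split_hs_code("6109.10.00"))
    # {'chapter': '61', 'heading': '6109', 'subheading': '610910', 'national_extension': '00'}
```
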
  2. Neural-network-based text classification (a minimal TextCNN sketch follows this sub-list)
    [1] Joulin A, Grave E, Bojanowski P, et al. Bag of tricks for efficient text classification[J]. arXiv preprint arXiv:1607.01759, 2016.
    [2] Kim Y. Convolutional neural networks for sentence classification[C]//Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP). 2014.
    [3] Zhang X, Zhao J, LeCun Y. Character-level convolutional networks for text classification[C]//Advances in Neural Information Processing Systems. 2015.
    [4] Kalchbrenner N, Grefenstette E, Blunsom P. A convolutional neural network for modelling sentences[C]//Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics. 2014.
    [5] Zhang Y, Wallace B. A sensitivity analysis of (and practitioners' guide to) convolutional neural networks for sentence classification[J]. arXiv preprint arXiv:1510.03820, 2015.
    [6] Liu P, Qiu X, Huang X. Recurrent neural network for text classification with multi-task learning[C]//Proceedings of the 25th International Joint Conference on Artificial Intelligence (IJCAI). 2016.
    [7] Zhou P, Shi W, Tian J, et al. Attention-based bidirectional long short-term memory networks for relation classification[C]//Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers). 2016: 207-212.
    [8] Miyato T, Dai A M, Goodfellow I. Adversarial training methods for semi-supervised text classification[J]. arXiv preprint arXiv:1605.07725, 2016.
    [9] Lai S, Xu L, Liu K, et al. Recurrent convolutional neural networks for text classification[C]//Proceedings of the 29th AAAI Conference on Artificial Intelligence. 2015.
    [10] Yang Z, Yang D, Dyer C, et al. Hierarchical attention networks for document classification[C]//Proceedings of the 2016 conference of the North American chapter of the association for computational linguistics: human language technologies. 2016: 1480-1489.
    [11] Bahdanau D, Cho K, Bengio Y. Neural machine translation by jointly learning to align and translate[J]. arXiv preprint arXiv:1409.0473, 2014.
    [12] Vaswani A, Shazeer N, Parmar N, et al. Attention is all you need[J]. arXiv preprint arXiv:1706.03762, 2017.
    [13] Peters M E, Neumann M, Iyyer M, et al. Deep contextualized word representations[J]. arXiv preprint arXiv:1802.05365, 2018.
    [14] Devlin J, Chang M W, Lee K, et al. Bert: Pre-training of deep bidirectional transformers for language understanding[J]. arXiv preprint arXiv:1810.04805, 2018.
    [15] Zhang Z, Han X, Liu Z, et al. ERNIE: Enhanced Language Representation with Informative Entities[J]. arXiv preprint arXiv:1905.07129, 2019.
    [16] Sun Y, Wang S, Li Y, et al. ERNIE: Enhanced Representation through Knowledge Integration[J]. arXiv preprint arXiv:1904.09223, 2019.
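
Of the models cited above, the convolutional classifier of reference [2] (Kim, 2014) is the most compact to show. Below is a minimal PyTorch sketch of that architecture; the hyperparameters (embedding size, filter widths, filter count, class count) are placeholders of my own, not values from the cited papers.

```python
# A minimal TextCNN sketch: parallel 1-D convolutions over word embeddings,
# max-over-time pooling, and a linear classifier (after Kim, 2014).
import torch
import torch.nn as nn
import torch.nn.functional as F

class TextCNN(nn.Module):
    def __init__(self, vocab_size, embed_dim=128, num_classes=10,
                 kernel_sizes=(3, 4, 5), num_filters=100):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, embed_dim)
        # One Conv1d per filter width; inputs are transposed to channels-first below.
        self.convs = nn.ModuleList(
            nn.Conv1d(embed_dim, num_filters, k) for k in kernel_sizes
        )
        self.fc = nn.Linear(num_filters * len(kernel_sizes), num_classes)

    def forward(self, token_ids):                        # token_ids: (batch, seq_len)
        x = self.embedding(token_ids).transpose(1, 2)    # (batch, embed_dim, seq_len)
        pooled = [F.relu(conv(x)).max(dim=2).values for conv in self.convs]
        return self.fc(torch.cat(pooled, dim=1))         # (batch, num_classes)

if __name__ == "__main__":
    model = TextCNN(vocab_size=5000)
    logits = model(torch.randint(0, 5000, (4, 50)))      # 4 random sequences of 50 ids
    print(logits.shape)                                   # torch.Size([4, 10])
```

The same skeleton carries over to the character-level variant of reference [3] by replacing the word-id embedding table with a character-id one.
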
  3. Knowledge representation learning (a toy TransE scoring sketch follows this sub-list)
    [1] Bordes A, Usunier N, Garcia-Duran A, et al. Translating embeddings for modeling multi-relational data[C]//Advances in Neural Information Processing Systems. 2013: 2787-2795.
    [2] Wang Z, Zhang J, Feng J, et al. Knowledge graph embedding by translating on hyperplanes[C]//Twenty-Eighth AAAI conference on artificial intelligence. 2014.
    [3] 方阳,赵翔,谭真,杨世宇,肖卫东.一种改进的基于翻译的知识图谱表示方法[J].计算机研究与发展,2018,55(01):139-150.
    [4] 陈文杰,文奕,张鑫,杨宁,赵爽.一种改进的基于TransE的知识图谱表示方法[J/OL].计算机工程:1-8[2019-10-05].https://doi.org/10.19678/j.issn.1000-3428.0054196.
    [5] Xiao H, Huang M, Hao Y, et al. TransA: An adaptive approach for knowledge graph embedding[J]. arXiv preprint, 2015.
    [6] Xiao H, Huang M, Hao Y, et al. TransG: A generative mixture model for knowledge graph embedding[J]. arXiv preprint, 2015.
    [7] Lin Y, Liu Z, Sun M, et al. Learning entity and relation embeddings for knowledge graph completion[C]//Twenty-ninth AAAI conference on artificial intelligence. 2015.
    [8] Ji G, He S, Xu L, et al. Knowledge graph embedding via dynamic mapping matrix[C]//Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics. 2015.
    [9] Ji G, Liu K, He S, et al. Knowledge graph completion with adaptive sparse transfer matrix[C]//Proceedings of the 30th AAAI Conference on Artificial Intelligence. 2016.
    [10] Wang Q, Mao Z, Wang B, et al. Knowledge graph embedding: A survey of approaches and applications[J]. IEEE Transactions on Knowledge and Data Engineering, 2017, 29(12): 2724-2743.
    [11] Fan M, Zhou Q, Chang E, et al. Transition-based knowledge graph embedding with relational mapping properties[C]//Proceedings of the 28th Pacific Asia Conference on Language, Information and Computing. 2014: 328-337.
    [12] Feng J, Zhou M, Hao Y, et al. Knowledge graph embedding by flexible translation[J]. arXiv preprint, 2015.
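
The translation-based models above (TransE and its TransH/TransR/TransD descendants) all build on the intuition of reference [1]: embed the head entity h, relation r, and tail entity t so that h + r ≈ t for true triples, and rank triples by the distance ||h + r - t||. The NumPy sketch below shows that score and the margin-based ranking loss; the embedding dimensions and triples are made-up toy values.

```python
# A toy TransE sketch: score a triple by ||h + r - t|| (lower = more plausible)
# and compare a true triple against a corrupted one with a margin ranking loss.
import numpy as np

rng = np.random.default_rng(0)
dim = 50
entity_emb = rng.normal(size=(100, dim))    # 100 hypothetical entities
relation_emb = rng.normal(size=(10, dim))   # 10 hypothetical relations

def transe_score(h: int, r: int, t: int) -> float:
    """L2 distance between (head + relation) and tail embeddings."""
    return float(np.linalg.norm(entity_emb[h] + relation_emb[r] - entity_emb[t]))

def margin_loss(pos_triple, neg_triple, margin: float = 1.0) -> float:
    """Margin-based ranking loss: push the true triple below the corrupted one."""
    return max(0.0, margin + transe_score(*pos_triple) - transe_score(*neg_triple))

if __name__ == "__main__":
    true_triple = (3, 1, 7)        # (head, relation, tail); indices are arbitrary
    corrupted = (3, 1, 42)         # same head/relation, corrupted tail
    print(transe_score(*true_triple), margin_loss(true_triple, corrupted))
```

TransH, TransR, and TransD keep this loss but change how h and t are projected before the translation, which is exactly the axis along which references [2], [7], and [8] differ.
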