
KDD 2023 | Representation Learning Paper Collection


The ACM SIGKDD Conference on Knowledge Discovery and Data Mining (KDD), first held in 1989, is the longest-running and largest top-tier international academic conference in data mining. It was also the first conference to introduce concepts such as big data, data science, predictive analytics, and crowdsourcing, and each year it attracts a large number of researchers and practitioners in data mining, machine learning, big data, and artificial intelligence.

Using AI technology, AMiner has categorized and organized the papers accepted at KDD 2023. Today we share the papers on representation learning! (Due to space constraints, only a subset of the papers is shown here; visit the KDD conference page linked at the end of this article to see all of them.)

1.DCdetector: Dual Attention Contrastive Representation Learning for Time Series Anomaly Detection

Link: https://www.aminer.cn/pub/6492753bd68f896efa888f46/

2.Generalized Matrix Local Low Rank Representation by Random Projection and Submatrix Propagation

Link: https://www.aminer.cn/pub/6433f6bc90e50fcafd6efdfd/

3.Joint Pre-training and Local Re-training: Transferable Representation Learning on Multi-source Knowledge Graphs

Link: https://www.aminer.cn/pub/647eaf51d68f896efad41d32/

4.Task Relation-aware Continual User Representation Learning

Link: https://www.aminer.cn/pub/647eaf35d68f896efad40763/

5.Dense Representation Learning and Retrieval for Tabular Data Prediction

Link: https://www.aminer.cn/pub/64af99fd3fda6d7f065a62e9/

6.Efficient and Effective Edge-wise Graph Representation Learning

Link: https://www.aminer.cn/pub/64af99fe3fda6d7f065a63b4/

7.CARL-G: Clustering-Accelerated Representation Learning on Graphs

Link: https://www.aminer.cn/pub/64af99fe3fda6d7f065a63ce/

8.LightPath: Lightweight and Scalable Path Representation Learning

Link: https://www.aminer.cn/pub/64af9a0b3fda6d7f065a70cd/

9.Urban Region Representation Learning with OpenStreetMap Building Footprints

Link: https://www.aminer.cn/pub/64af9a0b3fda6d7f065a70d1/

10.Representation Learning on Hyper-Relational and Numeric Knowledge Graphs with Transformers

Link: https://www.aminer.cn/pub/647572e0d68f896efa7b7983/

11.Expert Knowledge-Aware Image Difference Graph Representation Learning for Difference-Aware Medical Visual Question Answering

Link: https://www.aminer.cn/pub/64af9a023fda6d7f065a686d/

12.DyTed: Disentangled Representation Learning for Discrete-time Dynamic Graph

Link: https://www.aminer.cn/pub/64af9a093fda6d7f065a6eac/

13.Heterformer: Transformer-based Deep Node Representation Learning on Heterogeneous Text-Rich Networks

Link: https://www.aminer.cn/pub/64af9a093fda6d7f065a6eb0/


How to read papers with ChatPaper?

To help more researchers acquire knowledge from the literature more efficiently, AMiner has developed ChatPaper on top of the GLM-130B large language model. It helps researchers quickly improve the efficiency of searching for and reading papers and stay up to date with the latest research developments in their field, making research work easier to manage.

ChatPaper is a conversational private knowledge base that integrates retrieval, reading, and knowledge Q&A. Through the power of technology, AMiner hopes to help everyone acquire knowledge more efficiently.

ChatPaper: https://www.aminer.cn/chat/g

KDD conference page: https://www.aminer.cn/conf/5ea1b22bedb6e7d53c00c41b/KDD2023
