
[Computer Science] [2019] Time-Series Classification Using One-Dimensional Convolutional Neural Networks



This is a 176-page master's thesis from the University of Oslo, Norway (author: Sharanan Kulam).

In recent years, research in machine intelligence has made great progress, and neural network models have made important contributions in fields such as image classification and language understanding. Recurrent neural networks (RNNs) are usually the preferred approach for tasks such as language understanding and time-series analysis. However, a known problem is their inefficiency in capturing long-term dependencies, which has given rise to alternative RNN variants. Long short-term memory (LSTM) and gated recurrent units (GRU) solve this problem, but at the cost of increased computation. As a result, convolutional neural networks (CNNs) have in recent years been applied to sequence modelling and shown to outperform RNNs. This efficiency, however, has been examined in only a few comparative studies, most of which focus on language tasks. Such studies are even scarcer in the time-series classification domain, where traditional classification methods are still commonly used. To address this shortcoming and to better understand the roles of CNNs and RNNs in time-series classification, this thesis evaluates two shallow networks, a CNN and an LSTM. The work extends the few existing comparisons through an experimental approach and provides a baseline comparison of the two for the time-series classification domain, where such studies are almost absent.

To this end, the authors built an easily extensible system for running experiments and evaluated the models on three different datasets using cross-validation: classifying depressed patients from motor activity, predicting the energy demand of electric vehicles, and classifying the readiness of football players. The system evaluates the CNN and the LSTM separately on each dataset and generalises to other neural network models for similar comparative studies. The results show that a simple CNN achieves the same performance as the LSTM while training faster. For two of the use cases, the CNN is more than 30 times faster in wall-clock time, although there is a trade-off between training time and iterations, since the CNN needs more training iterations. The conclusion is that for time-series classification, CNNs should be preferred over LSTMs, because they are effective in both performance and training speed.
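
To make the comparison concrete, the following is a minimal Keras sketch of what a shallow one-dimensional CNN and a shallow LSTM classifier for multivariate time series can look like. The input shape, layer widths, and number of classes here are assumptions chosen for illustration; they are not the architectures or hyperparameters evaluated in the thesis.

```python
# A minimal sketch of the two shallow model families compared in the thesis
# (not the exact architectures used there): a one-dimensional CNN and an LSTM
# classifier for multivariate time series, built with Keras.
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

TIMESTEPS, CHANNELS, NUM_CLASSES = 96, 3, 2  # assumed example dimensions


def build_shallow_cnn():
    # One Conv1D block, global pooling over time, then a softmax classifier.
    return keras.Sequential([
        keras.Input(shape=(TIMESTEPS, CHANNELS)),
        layers.Conv1D(filters=32, kernel_size=5, activation="relu"),
        layers.GlobalAveragePooling1D(),
        layers.Dense(NUM_CLASSES, activation="softmax"),
    ])


def build_shallow_lstm():
    # A single LSTM layer; its final hidden state feeds the classifier.
    return keras.Sequential([
        keras.Input(shape=(TIMESTEPS, CHANNELS)),
        layers.LSTM(32),
        layers.Dense(NUM_CLASSES, activation="softmax"),
    ])


if __name__ == "__main__":
    # Synthetic data only, to show that both models accept the same input shape.
    x = np.random.rand(128, TIMESTEPS, CHANNELS).astype("float32")
    y = np.random.randint(0, NUM_CLASSES, size=128)
    for build in (build_shallow_cnn, build_shallow_lstm):
        model = build()
        model.compile(optimizer="adam",
                      loss="sparse_categorical_crossentropy",
                      metrics=["accuracy"])
        model.fit(x, y, epochs=1, batch_size=32, verbose=0)
        print(build.__name__, model.evaluate(x, y, verbose=0))
```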

In recent years, research in machine intelligence has gained increased momentum, and neural network models have made significant contributions in various fields, such as image classification and language understanding. Recurrent neural networks (RNNs) are often the preferred approach for tasks like language understanding and time-series analysis. However, a known problem is their inefficiency in capturing long-term dependencies, which has given rise to alternative RNNs. Long short-term memory (LSTM) and gated recurrent units (GRU) solve this problem, but at the expense of computational effort. As a result, convolutional neural networks (CNNs) have been explored for sequence modelling in recent years and have been shown to generally outperform RNNs. This efficiency, however, has been examined by only a few comparative studies, most of which focus primarily on language tasks. Similar studies are largely absent in the time-series classification domain, where traditional methods are often used. To address this shortcoming and further understand the effects of CNNs and RNNs in the time-series classification domain, we evaluate two shallow networks in this thesis, a CNN and an LSTM. We extend the few existing comparisons through an experimental approach and provide a baseline comparison of both for the time-series classification domain, where such studies are almost absent. To do so, we created an easily extensible system for running experiments and evaluated our models on three different datasets using cross-validation. We classify depressed patients using motor activity, predict the energy demand of Electric Vehicles (EVs), and classify the readiness of football players. The system was used to evaluate the CNN and the LSTM separately for each dataset and is generalisable to multiple neural network models that can be used for similar comparative studies. We show that a simple CNN achieves the same performance as an LSTM and is faster to train. For two of our use cases, the CNN is more than 30 times faster in wall-clock seconds, but we see a trade-off between training time in seconds and iterations, as the CNN uses more training iterations. We conclude that for time-series classification, CNNs should be the preferred choice over LSTMs, because of their effectiveness in performance and their faster training.
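
The evaluation protocol in the abstract (cross-validation plus a wall-clock comparison of training time) can be sketched as a small harness like the one below. The fold count, epoch budget, and the stand-in model are illustrative assumptions rather than the thesis's configuration; any Keras model builder, such as the CNN or LSTM sketched above, can be passed in instead.

```python
# A minimal sketch of stratified k-fold cross-validation with wall-clock
# timing, so that two models can be compared on accuracy and training cost.
# The inlined model, fold count, and epochs are illustrative assumptions.
import time
import numpy as np
from sklearn.model_selection import StratifiedKFold
from tensorflow import keras
from tensorflow.keras import layers


def tiny_cnn(timesteps, channels, num_classes):
    # Stand-in model so the snippet runs on its own; swap in any builder.
    return keras.Sequential([
        keras.Input(shape=(timesteps, channels)),
        layers.Conv1D(16, 3, activation="relu"),
        layers.GlobalAveragePooling1D(),
        layers.Dense(num_classes, activation="softmax"),
    ])


def cross_validate(build_fn, x, y, n_splits=5, epochs=5):
    # Returns mean test accuracy and mean fit time (seconds) over the folds.
    accuracies, fit_seconds = [], []
    skf = StratifiedKFold(n_splits=n_splits, shuffle=True, random_state=0)
    for train_idx, test_idx in skf.split(x, y):
        model = build_fn(x.shape[1], x.shape[2], len(np.unique(y)))
        model.compile(optimizer="adam",
                      loss="sparse_categorical_crossentropy",
                      metrics=["accuracy"])
        start = time.perf_counter()
        model.fit(x[train_idx], y[train_idx],
                  epochs=epochs, batch_size=32, verbose=0)
        fit_seconds.append(time.perf_counter() - start)
        _, acc = model.evaluate(x[test_idx], y[test_idx], verbose=0)
        accuracies.append(acc)
    return float(np.mean(accuracies)), float(np.mean(fit_seconds))


if __name__ == "__main__":
    # Synthetic stand-in data; a real study would load one of the datasets here.
    x = np.random.rand(200, 96, 3).astype("float32")
    y = np.random.randint(0, 2, size=200)
    acc, sec = cross_validate(tiny_cnn, x, y)
    print(f"mean accuracy {acc:.3f}, mean fit time {sec:.1f}s per fold")
```

Swapping the builder between a CNN and an LSTM then yields directly comparable accuracy and training-time figures, which is the kind of comparison the thesis reports.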

  1. Introduction
  2. Background
  3. Methodology
  4. Experiments
  5. Conclusion
