Link to the paper: https://arxiv.org/pdf/1409.3215.pdf

Main Points:

  1. Encoder-Decoder Model: Input sequence -> A vector of a fixed dimensionality -> Target sequence.
  2. A multilayered LSTM: The LSTM did not have difficulty with long sentences. Deep LSTMs significantly outperformed shallow LSTMs.
  3. Reverse Input: Better performance. While the authors do not have a complete explanation for this phenomenon, they believe it is caused by the introduction of many short-term dependencies into the dataset. LSTMs trained on reversed source sentences did much better on long sentences than LSTMs trained on the raw source sentences, which suggests that reversing the input sentences results in LSTMs with better memory utilization. (A minimal code sketch of these three ideas follows this list.)
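
The three points above can be made concrete in a few lines of code. Below is a minimal, illustrative sketch in PyTorch (the framework, the `Seq2Seq` class name, and the small embedding/hidden sizes are my own assumptions, not the authors' setup; the paper itself used deep 4-layer LSTMs with 1000 cells per layer and 1000-dimensional word embeddings trained on WMT'14 English-to-French): a stacked encoder LSTM reads the reversed source sentence and compresses it into its final hidden and cell states, and a stacked decoder LSTM generates the target sequence conditioned on that fixed-dimensional vector.

```python
import torch
import torch.nn as nn

class Seq2Seq(nn.Module):
    """Illustrative encoder-decoder: sequence -> fixed-dimensional vector -> sequence."""
    def __init__(self, src_vocab, tgt_vocab, emb_dim=256, hidden_dim=512, num_layers=4):
        super().__init__()
        self.src_emb = nn.Embedding(src_vocab, emb_dim)
        self.tgt_emb = nn.Embedding(tgt_vocab, emb_dim)
        # Multilayered ("deep") LSTMs for both encoder and decoder.
        self.encoder = nn.LSTM(emb_dim, hidden_dim, num_layers, batch_first=True)
        self.decoder = nn.LSTM(emb_dim, hidden_dim, num_layers, batch_first=True)
        self.out = nn.Linear(hidden_dim, tgt_vocab)

    def forward(self, src, tgt):
        # Reverse the source sentence: the paper reports this introduces many
        # short-term dependencies and improves performance.
        src = torch.flip(src, dims=[1])
        # Encode: the whole input sequence is summarized by the encoder's final
        # (hidden, cell) states -- the "vector of a fixed dimensionality".
        _, (h, c) = self.encoder(self.src_emb(src))
        # Decode: produce target-token logits conditioned on that vector
        # (teacher forcing with the shifted target sequence at training time).
        dec_out, _ = self.decoder(self.tgt_emb(tgt), (h, c))
        return self.out(dec_out)

# Toy usage with random token ids: 2 source sentences of length 7,
# 2 target prefixes of length 5.
model = Seq2Seq(src_vocab=1000, tgt_vocab=1200)
src = torch.randint(0, 1000, (2, 7))
tgt = torch.randint(0, 1200, (2, 5))
print(model(src, tgt).shape)  # torch.Size([2, 5, 1200])
```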


Other Key Points:

  1. A significant limitation: Despite their flexibility and power, DNNs can only be applied to problems whose inputs and targets can be sensibly encoded with vectors of fixed dimensionality.
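
To make the limitation concrete, here is a small illustrative contrast (again in PyTorch, with made-up shapes): a plain feed-forward layer requires inputs of one fixed size, while a recurrent LSTM can consume sequences of arbitrary length, which is what makes the encoder-decoder formulation possible.

```python
import torch
import torch.nn as nn

mlp = nn.Linear(10, 4)                    # expects exactly 10 input features
lstm = nn.LSTM(10, 4, batch_first=True)   # consumes any number of time steps

x_fixed = torch.randn(2, 10)              # fixed-size input: fine for the MLP
x_short = torch.randn(2, 3, 10)           # 3-step sequence
x_long  = torch.randn(2, 50, 10)          # 50-step sequence

print(mlp(x_fixed).shape)                 # torch.Size([2, 4])
print(lstm(x_short)[0].shape)             # torch.Size([2, 3, 4])
print(lstm(x_long)[0].shape)              # torch.Size([2, 50, 4])
# Flattening a sequence to feed the MLP breaks as soon as the length changes:
# mlp(x_short.flatten(1)) would raise an error (30 features != the fixed 10).
```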