The figure below illustrates a Recurrent Neural Network (or RNN) applied to a sequence of length 4 to generate another sequence of length 4. An RNN is made up of 'cells' that maintain an internal hidden state (or memory), here denoted by h. The output at each index (here denoted by y-hat) is a function of the hidden state at that index, via a matrix of weights W. The hidden state at each index is a function of the input (x) at that index (via a matrix of weights U) and the hidden state at the previous index (via a matrix of weights V), followed by a non-linearity. Matrices with the same letter are the same across time indices (that is, their weights are shared). RNNs process sequence elements one at a time, updating their cells' internal states as they do so. Thus elements early in the sequence can have an influence on a cell's output later in the sequence.
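The forward pass described above can be sketched in a few lines of NumPy. The dimensions, the random weight initialization, and the choice of tanh as the non-linearity are illustrative assumptions, not details taken from the figure:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes (assumed): 3-dim inputs, 5-dim hidden state, 2-dim outputs.
input_dim, hidden_dim, output_dim, seq_len = 3, 5, 2, 4

# Shared weight matrices: the same U, V, W are reused at every time index.
U = rng.normal(size=(hidden_dim, input_dim))   # input -> hidden
V = rng.normal(size=(hidden_dim, hidden_dim))  # previous hidden -> hidden
W = rng.normal(size=(output_dim, hidden_dim))  # hidden -> output

xs = rng.normal(size=(seq_len, input_dim))     # input sequence of length 4
h = np.zeros(hidden_dim)                       # initial hidden state

outputs = []
for x in xs:
    # New hidden state: a function of the current input (via U) and the
    # previous hidden state (via V), followed by a non-linearity (tanh here).
    h = np.tanh(U @ x + V @ h)
    # Output at this index: a function of the hidden state, via W.
    outputs.append(W @ h)

y_hat = np.stack(outputs)  # output sequence, shape (4, output_dim)
```

Because h is carried forward through the loop, the output at the last index depends (indirectly, through repeated applications of V) on every earlier input, which is how early elements influence later outputs.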