The following figure illustrates a multi-layer perceptron (MLP) with two hidden layers. An MLP is a fully connected feed-forward neural network made up of perceptrons. Fully connected means that every perceptron in a given layer (except the first layer, which is the input) is connected to all perceptrons of the previous layer. Feed-forward means there is a clear ordering of layers and connections occur only between consecutive layers (i.e. there are no connection cycles or skip connections). Intermediate layers (all except the first and last) are called hidden layers, and their perceptrons are called hidden units. In the figure, there are N input units, M hidden units in the first hidden layer, L hidden units in the second hidden layer, and K units in the output layer. N and K depend on the application; M and L, as well as the number of hidden layers, are hyper-parameter choices.
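
As a concrete illustration, here is a minimal NumPy sketch of the forward pass through such an MLP. The specific activation function (sigmoid) and the example sizes are assumptions for demonstration; the architecture itself does not fix them.

```python
import numpy as np

# Layer widths: N inputs, two hidden layers of M and L units, K outputs.
# These example values are arbitrary; N and K are application-dependent,
# while M and L are hyper-parameter choices.
N, M, L, K = 4, 8, 6, 3

def sigmoid(z):
    # Assumed activation; the description above does not prescribe one.
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)

# One weight matrix and bias vector per pair of consecutive layers.
# Fully connected: each unit receives input from every unit of the previous layer.
W1, b1 = rng.standard_normal((M, N)), np.zeros(M)
W2, b2 = rng.standard_normal((L, M)), np.zeros(L)
W3, b3 = rng.standard_normal((K, L)), np.zeros(K)

def forward(x):
    """Feed-forward pass: activations flow only from one layer to the next."""
    h1 = sigmoid(W1 @ x + b1)   # first hidden layer (M units)
    h2 = sigmoid(W2 @ h1 + b2)  # second hidden layer (L units)
    return W3 @ h2 + b3         # output layer (K units)

x = rng.standard_normal(N)  # an example input vector
print(forward(x).shape)     # (K,)
```

Note how the absence of cycles and skip connections shows up directly in the code: each layer's output is computed solely from the output of the layer immediately before it.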