Table 5 Results of the hyperparameter tuning

From: Evaluation of neural networks for residential load forecasting and the impact of systematic feature identification

| Model | Model parameters | Training parameters | Data parameters |
|-------|------------------|---------------------|-----------------|
| FFN | Hidden layer size: 512; Number of layers: 1; Dropout probability: 0.2; Activation: Leaky ReLU | Batch size: 256; Optimizer: AdaGrad; Learning rate: 0.01; Gradient clipping: 0.1 | Robust scaler; Hour of day; Day of week; 24 lags |
| RNN | Hidden layer size: 1024; Number of layers: 2; No dropout; Activation: ReLU | Batch size: 256; Optimizer: Adam; Learning rate: 0.01; Gradient clipping: 0.1 | Robust scaler; Hour of day; Day of week; 24 lags |
| LSTM | Hidden layer size: 256; Number of layers: 2; No dropout; Activation: Tanh | Batch size: 256; Optimizer: Adam; Learning rate: 0.01; Gradient clipping: 0.1 | Robust scaler; Hour of day; Day of week; 24 lags |
| GRU | Hidden layer size: 256; Number of layers: 2; No dropout; Activation: Tanh | Batch size: 256; Optimizer: Adam; Learning rate: 0.01; Gradient clipping: 0.1 | Robust scaler; Hour of day; Day of week; 24 lags |
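The paper does not publish code, so the following is a minimal PyTorch sketch of how the tuned FFN and LSTM configurations above could be instantiated. Only the hyperparameter values (layer sizes, dropout, activations, batch size, optimizers, learning rate, clipping threshold, scaler) come from the table; the input width of 26, the per-step LSTM feature encoding, the single-step forecast head, the MSE loss, and the choice of norm-based clipping are all assumptions made for illustration.

```python
import torch
import torch.nn as nn
from sklearn.preprocessing import RobustScaler

# Assumed feature layout (not specified in the table): 24 lagged load
# values plus numeric hour-of-day and day-of-week features, giving 26
# inputs per sample for the FFN.
N_FEATURES = 26

# Tuned FFN: one hidden layer of 512 units, dropout 0.2, Leaky ReLU.
ffn = nn.Sequential(
    nn.Linear(N_FEATURES, 512),
    nn.LeakyReLU(),
    nn.Dropout(p=0.2),
    nn.Linear(512, 1),  # single-step load forecast (assumed output)
)

# Tuned LSTM: 2 layers of 256 hidden units, no dropout; tanh is the
# default cell activation in torch.nn.LSTM. Here the 24 lags are fed
# as a 24-step sequence with 3 features per step (load, hour, day),
# which is one plausible encoding rather than the paper's stated one.
lstm = nn.LSTM(input_size=3, hidden_size=256, num_layers=2,
               batch_first=True)

# Training parameters from the table: AdaGrad for the FFN (Adam for
# the recurrent models), learning rate 0.01, gradient clipping at 0.1.
# MSE loss and norm-based (rather than value-based) clipping are
# assumptions.
optimizer = torch.optim.Adagrad(ffn.parameters(), lr=0.01)
criterion = nn.MSELoss()

def train_step(x_batch, y_batch):
    """One optimization step on a batch of 256 samples (per the table)."""
    optimizer.zero_grad()
    loss = criterion(ffn(x_batch), y_batch)
    loss.backward()
    torch.nn.utils.clip_grad_norm_(ffn.parameters(), max_norm=0.1)
    optimizer.step()
    return loss.item()

# Per the data parameters, inputs are scaled with a robust scaler
# (sklearn's RobustScaler assumed): fit on training data only, then
# apply the same transform to validation and test features.
scaler = RobustScaler()
```

With this setup, swapping the FFN for one of the recurrent models only changes the module definition and the optimizer (Adam instead of AdaGrad); the batch size, learning rate, clipping threshold, and scaling pipeline stay the same across all four tuned configurations.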