Table 2 MAE results on the two datasets for the 24 h, 96 h, and 720 h forecast horizons

From: Transformer training strategies for forecasting multiple load time series

| Model | Strategy | Input (days) | Electricity 24 h | Electricity 96 h | Electricity 720 h | Ausgrid 24 h | Ausgrid 96 h | Ausgrid 720 h |
|---|---|---|---|---|---|---|---|---|
| Informer (Zhou et al. 2021) | MV | 4 | 0.399 | 0.407 | 0.450 | 0.582 | 0.607 | 0.645 |
| Autoformer (Wu et al. 2021) | MV | 4 | 0.289 | 0.317 | 0.361 | 0.579 | 0.569 | 0.592 |
| FEDformer (Zhou et al. 2022) | MV | 4 | *0.284* | *0.297* | *0.343* | *0.560* | *0.566* | 0.609 |
| LSTM | MV | 7 | 0.400 | 0.402 | 0.407 | 0.611 | 0.618 | 0.613 |
| Transformer | MV | 7 | 0.366 | 0.384 | 0.382 | 0.584 | 0.586 | *0.576* |
| Persistence | L | – | 0.279 | 0.279 | 0.447 | 0.647 | 0.647 | 0.717 |
| Linear regression | L | 14 | 0.203 | *0.233* | *0.296* | *0.496* | *0.524* | *0.565* |
| MLP | L | 7 | *0.199* | 0.236 | 0.308 | 0.499 | 0.532 | 0.567 |
| LSTM | L | 7 | 0.263 | 0.283 | 0.337 | 0.517 | 0.541 | 0.573 |
| Transformer | L | 7 | 0.256 | 0.289 | 0.354 | 0.535 | 0.563 | 0.583 |
| LTSF-Linear (Zeng et al. 2022) | G | 14 | 0.209 | 0.237 | 0.301 | 0.490 | 0.515 | 0.553 |
| PatchTST (Nie et al. 2022) | G | 14 | 0.190 | ***0.222*** | ***0.290*** | ***0.468*** | ***0.494*** | ***0.522*** |
| LSTM | G | 7 | 0.207 | 0.239 | 0.302 | 0.491 | 0.525 | 0.559 |
| Transformer | G | 14 | ***0.184*** | 0.225 | 0.312 | 0.482 | 0.514 | 0.533 |

MV = multivariate, L = local, G = global training strategy. The best results are highlighted in bold and the best results per training strategy in italic.
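For context, the scores above are mean absolute errors (MAE). Below is a minimal sketch of the metric, assuming the standard MAE definition with errors averaged jointly over time steps and series; the array shapes and random data are purely illustrative and not taken from the paper, whose exact aggregation may differ:

```python
import numpy as np

def mae(y_true: np.ndarray, y_pred: np.ndarray) -> float:
    """Mean absolute error, averaged over all time steps and series."""
    return float(np.mean(np.abs(y_true - y_pred)))

# Illustrative example: score a 96 h forecast for 3 load series.
rng = np.random.default_rng(0)
y_true = rng.standard_normal((96, 3))                  # standardized ground-truth loads
y_pred = y_true + 0.1 * rng.standard_normal((96, 3))   # hypothetical model forecast
print(f"MAE: {mae(y_true, y_pred):.3f}")
```

The sub-1.0 values in the table suggest the series are standardized before scoring, so MAE is reported in units of standard deviations rather than raw load.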