
JulesBelveze/time-series-autoencoder


LSTM autoencoder with attention for multivariate time series


This repository contains an autoencoder for multivariate time series forecasting. It features the two attention mechanisms described in "A Dual-Stage Attention-Based Recurrent Neural Network for Time Series Prediction" and was inspired by Seanny123's repository.

[Figure: autoencoder architecture]
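Conceptually, the model follows the dual-stage design of the paper above: an input attention re-weights the driving series at each encoder step, and a temporal attention re-weights the encoder hidden states at each decoder step. Below is a minimal, self-contained PyTorch sketch of that idea; class and parameter names are illustrative and do not mirror the repository's actual modules.

import torch
import torch.nn as nn

class InputAttentionEncoder(nn.Module):
    # Input attention (stage 1): at each time step, softmax weights over
    # the n driving series are computed from the previous LSTM states,
    # and the re-weighted input is fed to an LSTM cell.
    def __init__(self, n_features, hidden_size):
        super().__init__()
        self.cell = nn.LSTMCell(n_features, hidden_size)
        self.attn = nn.Linear(2 * hidden_size + n_features, n_features)

    def forward(self, x):  # x: (batch, seq_len, n_features)
        b, T, _ = x.shape
        h = x.new_zeros(b, self.cell.hidden_size)
        s = x.new_zeros(b, self.cell.hidden_size)
        states = []
        for t in range(T):
            alpha = torch.softmax(self.attn(torch.cat([h, s, x[:, t]], dim=1)), dim=1)
            h, s = self.cell(alpha * x[:, t], (h, s))
            states.append(h)
        return torch.stack(states, dim=1)  # (batch, seq_len, hidden_size)

class TemporalAttentionDecoder(nn.Module):
    # Temporal attention (stage 2): at each decoding step, softmax weights
    # over the encoder hidden states form a context vector that drives the
    # decoder LSTM cell and the final prediction.
    def __init__(self, enc_hidden, dec_hidden):
        super().__init__()
        self.cell = nn.LSTMCell(enc_hidden, dec_hidden)
        self.score = nn.Linear(dec_hidden + enc_hidden, 1)
        self.out = nn.Linear(dec_hidden + enc_hidden, 1)

    def forward(self, enc_states, steps=1):  # enc_states: (batch, T, enc_hidden)
        b, T, _ = enc_states.shape
        h = enc_states.new_zeros(b, self.cell.hidden_size)
        c = enc_states.new_zeros(b, self.cell.hidden_size)
        preds = []
        for _ in range(steps):
            hT = h.unsqueeze(1).expand(-1, T, -1)
            beta = torch.softmax(self.score(torch.cat([hT, enc_states], dim=2)), dim=1)
            context = (beta * enc_states).sum(dim=1)  # (batch, enc_hidden)
            h, c = self.cell(context, (h, c))
            preds.append(self.out(torch.cat([h, context], dim=1)))
        return torch.cat(preds, dim=1)  # (batch, steps)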

Download and dependencies

To clone the repository, run:

git clone https://github.com/JulesBelveze/time-series-autoencoder.git
Use uv

First, install uv:

# install uv
curl -LsSf https://astral.sh/uv/install.sh | sh # linux/mac
# or
brew install uv # mac with homebrew

Then set up the environment and install the dependencies:

cd time-series-autoencoder
uv venv
uv pip sync pyproject.toml
Alternatively, install directly from requirements.txt:
pip install -r requirements.txt
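To verify that the environment is ready, a quick import check such as the following should succeed (this assumes torch and hydra-core are among the pinned dependencies, as the model and the configuration handling suggest):

python -c "import torch, hydra; print(torch.__version__)"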

Usage

The project uses Hydra as its configuration parser. You can change the parameters directly in your .yaml file, or override/set them from the command line (for a complete guide, please refer to the Hydra docs).

python3 main.py -cp=[PATH_TO_FOLDER_CONFIG] -cn=[CONFIG_NAME]
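As an illustration, a config like the one below could be selected with -cn and overridden from the command line using Hydra's key=value syntax. The key names here are hypothetical; check the repository's .yaml files for the actual ones.

# conf/forecast.yaml -- hypothetical keys for illustration
batch_size: 32
seq_len: 24
label_col: target
data_path: data/train.csv
do_train: true

python3 main.py -cp=conf -cn=forecast batch_size=64 seq_len=48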

Optional arguments:

 -h, --help            show this help message and exit
 --batch-size BATCH_SIZE
                       batch size
 --output-size OUTPUT_SIZE
                       size of the output; defaults to 1 for forecasting
 --label-col LABEL_COL
                       name of the target column
 --input-att INPUT_ATT
                       whether or not to activate the input attention mechanism
 --temporal-att TEMPORAL_ATT
                       whether or not to activate the temporal attention mechanism
 --seq-len SEQ_LEN     window length to use for forecasting
 --hidden-size-encoder HIDDEN_SIZE_ENCODER
                       size of the encoder's hidden states
 --hidden-size-decoder HIDDEN_SIZE_DECODER
                       size of the decoder's hidden states
 --reg-factor1 REG_FACTOR1
                       contribution factor of the L1 regularization if using a sparse autoencoder
 --reg-factor2 REG_FACTOR2
                       contribution factor of the L2 regularization if using a sparse autoencoder
 --reg1 REG1           activate/deactivate L1 regularization
 --reg2 REG2           activate/deactivate L2 regularization
 --denoising DENOISING
                       whether or not to use a denoising autoencoder
 --do-train DO_TRAIN   whether or not to train the model
 --do-eval DO_EVAL     whether or not to evaluate the model
 --data-path DATA_PATH
                       path to the data file
 --output-dir OUTPUT_DIR
                       name of the folder for output files
 --ckpt CKPT           checkpoint path for evaluation
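For instance, assuming the flags are accepted as listed above, a training run on a CSV file could look like this (all values are illustrative):

python3 main.py --do-train True --data-path data/train.csv --label-col target --seq-len 24 --batch-size 16 --output-dir output/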

Features

  • handles multivariate time series
  • attention mechanisms
  • denoising autoencoder (sketched below)
  • sparse autoencoder (sketched below)
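The denoising and sparse options boil down to small changes in the training objective. Here is a hedged sketch, with parameter names mirroring the flags above; the repository's actual implementation may differ:

import torch

def autoencoder_loss(model, x, reg1=True, reg2=True,
                     reg_factor1=1e-4, reg_factor2=1e-4, denoising=True):
    # Denoising: corrupt the input with Gaussian noise but reconstruct
    # the clean series.
    inputs = x + 0.1 * torch.randn_like(x) if denoising else x
    recon = model(inputs)
    loss = torch.nn.functional.mse_loss(recon, x)
    # Sparse autoencoder: L1/L2 penalties on the model weights, scaled
    # by reg_factor1 / reg_factor2.
    if reg1:
        loss = loss + reg_factor1 * sum(p.abs().sum() for p in model.parameters())
    if reg2:
        loss = loss + reg_factor2 * sum(p.pow(2).sum() for p in model.parameters())
    return loss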

Examples

Under the examples folder you can find scripts to train the model in both cases:

  • reconstruction: the dataset can be found here
  • forecasting: the dataset can be found here
