
PyTorch Recurrent Variational Autoencoder with Dilated Convolutions

Model:

This is an implementation of Zichao Yang et al.'s Improved Variational Autoencoders for Text Modeling using Dilated Convolutions, using the token embedding from Kim's Character-Aware Neural Language Models.

[model architecture diagram]

Most of the recurrent variational autoencoder implementation is adapted from analvikingur/pytorch_RVAE.
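For orientation, here is a minimal sketch of the decoder idea from the Yang et al. paper: the latent code conditions a stack of causal 1-D convolutions whose dilation grows exponentially with depth, widening the receptive field without recurrence. All names and sizes below (`DilatedConvDecoder`, `kernel_size`, `num_layers`) are illustrative assumptions, not this repository's actual API.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DilatedConvDecoder(nn.Module):
    """Sketch of a dilated causal convolutional decoder for text.

    The latent code z is broadcast along the time axis, concatenated with
    the embedded input tokens, and passed through causal Conv1d layers
    with exponentially growing dilation.
    """

    def __init__(self, embed_size, z_size, hidden_size, vocab_size,
                 kernel_size=3, num_layers=4):
        super().__init__()
        self.kernel_size = kernel_size
        in_channels = embed_size + z_size
        self.convs = nn.ModuleList()
        for i in range(num_layers):
            dilation = 2 ** i  # 1, 2, 4, 8: exponentially growing context
            self.convs.append(nn.Conv1d(
                in_channels if i == 0 else hidden_size,
                hidden_size, kernel_size, dilation=dilation))
        self.out = nn.Linear(hidden_size, vocab_size)

    def forward(self, embedded, z):
        # embedded: [batch, seq_len, embed_size], z: [batch, z_size]
        seq_len = embedded.size(1)
        z_seq = z.unsqueeze(1).expand(-1, seq_len, -1)
        x = torch.cat([embedded, z_seq], dim=-1).transpose(1, 2)  # [B, C, T]
        for conv in self.convs:
            pad = (self.kernel_size - 1) * conv.dilation[0]
            x = F.relu(conv(F.pad(x, (pad, 0))))  # left-pad => causal conv
        return self.out(x.transpose(1, 2))  # [B, T, vocab] logits
```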

Usage

Before training the model, you must first train word embeddings:

$ python train_word_embeddings.py

This script trains the word embeddings defined in Mikolov et al., Distributed Representations of Words and Phrases and their Compositionality (a sketch of the objective follows the parameter list below).

Parameters:

--use-cuda

--num-iterations

--batch-size

--num-sample –– number of tokens sampled from noise
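The technique from that paper is skip-gram with negative sampling; --num-sample controls how many noise tokens are drawn per positive (center, context) pair. Below is a minimal sketch of the loss, with assumed names and shapes rather than the script's actual code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SkipGramNEG(nn.Module):
    """Sketch of skip-gram with negative sampling (Mikolov et al., 2013)."""

    def __init__(self, vocab_size, embed_size):
        super().__init__()
        self.in_embed = nn.Embedding(vocab_size, embed_size)   # center words
        self.out_embed = nn.Embedding(vocab_size, embed_size)  # context words

    def forward(self, center, context, noise):
        # center: [B], context: [B], noise: [B, K] token ids; in the paper
        # noise ids are drawn from the unigram distribution to the 3/4 power.
        v = self.in_embed(center)        # [B, E]
        u_pos = self.out_embed(context)  # [B, E]
        u_neg = self.out_embed(noise)    # [B, K, E]
        pos = F.logsigmoid((v * u_pos).sum(-1))  # pull true pairs together
        neg = F.logsigmoid(-torch.bmm(u_neg, v.unsqueeze(2)))  # push noise away
        return -(pos + neg.sum(1).squeeze(-1)).mean()
```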

To train the model, use:

$ python train.py

Parameters:

--use-cuda

--num-iterations

--batch-size

--learning-rate

--dropout –– probability of a unit in the decoder input being zeroed

--use-trained –– load a previously trained model
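For reference, a VAE training step minimizes reconstruction cross-entropy plus the KL divergence between the approximate posterior N(mu, sigma^2) and the standard-normal prior; --dropout regularizes the decoder input so the decoder cannot ignore the latent code. A minimal sketch of one step follows, with an assumed model interface that is not necessarily this repository's.

```python
import torch
import torch.nn.functional as F

def train_step(model, optimizer, input_ids, target_ids, kl_weight=1.0):
    """One hypothetical RVAE training step: reconstruction + KL loss."""
    optimizer.zero_grad()
    # Assumed interface: model returns token logits and posterior params.
    logits, mu, logvar = model(input_ids)  # [B, T, V], [B, Z], [B, Z]
    recon = F.cross_entropy(logits.view(-1, logits.size(-1)),
                            target_ids.view(-1))
    # KL( N(mu, sigma^2) || N(0, I) ) in closed form, batch-averaged
    kld = -0.5 * torch.mean(
        torch.sum(1 + logvar - mu.pow(2) - logvar.exp(), dim=1))
    loss = recon + kl_weight * kld  # kl_weight is often annealed from 0 to 1
    loss.backward()
    optimizer.step()
    return loss.item()
```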

To sample data after training, use:

$ python sample.py

Parameters:

--use-cuda

--num-sample
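Sampling from a trained VAE amounts to drawing a latent code from the standard-normal prior and decoding it into tokens. A minimal greedy-decoding sketch follows; the model.decode interface and the bos/eos ids are assumptions, and the repository's sample.py evidently also supports beam search via --beam-size.

```python
import torch

@torch.no_grad()
def sample_sequence(model, z_size, max_len=50, bos_id=1, eos_id=2,
                    device="cpu"):
    """Sketch: draw z ~ N(0, I) from the prior and greedily decode."""
    z = torch.randn(1, z_size, device=device)  # latent code from the prior
    tokens = torch.tensor([[bos_id]], device=device)
    for _ in range(max_len):
        logits = model.decode(tokens, z)        # assumed: [1, T, vocab]
        next_id = logits[0, -1].argmax().item() # greedy choice per step
        if next_id == eos_id:
            break
        tokens = torch.cat(
            [tokens, torch.tensor([[next_id]], device=device)], dim=1)
    return tokens.squeeze(0).tolist()
```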

For example, to launch training in the background and then sample from a saved model:

$ nohup ../python3/bin/python3 -u train.py &
$ ../python3/bin/python3 -u sample.py --use-trained ./data/poem_trained_2100_RVAE --beam-size 50 --z-size 30
