createamind/vid2vid

Forked from https://github.com/junyanz/pytorch-CycleGAN-and-pix2pix

a.sh contains the training command:

python train.py --dataroot ./dataset/ucf-npy/ --dataset_mode v --model pix2pix --which_model_netG unet_256 --which_direction AtoB --norm batch --niter 10 --niter_decay 10 --batchSize 1 --name virtualkittidepth --depth 7 --max_dataset_size 150 --output_nc 3 --input_nc 3 --gpu_ids 0 --data_dir ./dataset/UCF/v_BabyCrawling**.avi --load_video 1

(This command runs OK.)
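As a minimal sketch of what the video-loading path might look like when `--load_video 1` is set, one could decode clips roughly as below. This is an assumption, not the repo's dataset code: `load_clip` is a hypothetical helper, `--depth 7` is assumed to be the clip length, and OpenCV is used only for illustration.

```python
# Hypothetical sketch of loading .avi clips for the "v" dataset mode.
# frame_count stands in for the --depth option (an assumption).
import glob

import cv2          # OpenCV video decoding
import numpy as np


def load_clip(avi_path, frame_count=7, size=(256, 256)):
    """Read the first `frame_count` frames of an .avi file as a float array."""
    cap = cv2.VideoCapture(avi_path)
    frames = []
    while len(frames) < frame_count:
        ok, frame = cap.read()
        if not ok:
            break
        frame = cv2.resize(frame, size)
        frame = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
        frames.append(frame.astype(np.float32) / 127.5 - 1.0)  # scale to [-1, 1]
    cap.release()
    return np.stack(frames) if frames else None


# Expand the wildcard pattern passed to --data_dir and load each clip.
clips = [load_clip(p) for p in sorted(glob.glob("./dataset/UCF/v_BabyCrawling**.avi"))]
```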

Video2XYZ has been implemented: [vid2dep demo show] [vid2vid demo show]

[video generation demo: children]

In progress: vid2speed, vid2angle, vid2action, vid2reward, vid2nlp (ref: #3)

A more complete goal is sensorfuse2representation, representation2XYZ, and SF2Rpstt2XYZ.
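The naming above suggests a two-stage pipeline: fuse sensors into a representation, then map that representation to XYZ. Below is a rough, purely illustrative PyTorch sketch of how such a composition could look; every module name, layer size, and input shape here is an assumption, not code from this repository.

```python
# Illustrative sketch of sensorfuse2representation -> representation2XYZ;
# all names, shapes, and sizes are assumptions.
import torch
import torch.nn as nn


class SensorFuse2Representation(nn.Module):
    """Fuse camera frames and auxiliary sensor readings into one latent vector."""

    def __init__(self, sensor_dim=16, latent_dim=128):
        super().__init__()
        self.image_encoder = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.fuse = nn.Linear(64 + sensor_dim, latent_dim)

    def forward(self, image, sensors):
        fused = torch.cat([self.image_encoder(image), sensors], dim=1)
        return torch.relu(self.fuse(fused))


class Representation2XYZ(nn.Module):
    """Map the fused representation to an XYZ target (e.g. depth or position)."""

    def __init__(self, latent_dim=128, out_dim=3):
        super().__init__()
        self.head = nn.Sequential(nn.Linear(latent_dim, 64), nn.ReLU(), nn.Linear(64, out_dim))

    def forward(self, z):
        return self.head(z)


# SF2Rpstt2XYZ: compose the two stages end to end.
encoder, head = SensorFuse2Representation(), Representation2XYZ()
xyz = head(encoder(torch.randn(1, 3, 256, 256), torch.randn(1, 16)))
```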

For ProgressiveGAN / pix2pixHD, the idea is that angle prediction can be trained coarse to fine: /2 (left / right), /4 (hard left, soft left, hard right, soft right), /8, and so on. As the generated images become progressively finer, the accuracy of the trained prediction increases.
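A minimal sketch of the coarse-to-fine labels this describes: the same steering angle is re-binned into 2, then 4, then 8 classes as training progresses. The normalized angle range and equal-width bins are assumptions for illustration only.

```python
# Sketch of coarse-to-fine angle labels: one angle, binned ever more finely.
import numpy as np


def angle_to_bin(angle, num_bins, max_angle=1.0):
    """Map a steering angle in [-max_angle, max_angle] to a bin index in [0, num_bins)."""
    angle = np.clip(angle, -max_angle, max_angle)
    # Shift to [0, 2*max_angle], then quantize into num_bins equal-width bins.
    idx = int((angle + max_angle) / (2 * max_angle) * num_bins)
    return min(idx, num_bins - 1)


angle = -0.35  # a moderately hard left turn
for num_bins in (2, 4, 8):          # finer labels as the generated images get finer
    print(num_bins, angle_to_bin(angle, num_bins))
# 2 bins: left / right; 4 bins: hard-left, soft-left, soft-right, hard-right; ...
```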

Ref: Curiosity-driven Exploration by Self-supervised Prediction

New ref: https://github.com/createamind/keras-cpcgan
