Follow the steps below to set up the environment and run the inference demo.
Clone the repository:
```shell
git clone git@github.com:rayray9999/Genfocus.git
cd Genfocus
```

Environment setup:

```shell
conda create -n Genfocus python=3.12
conda activate Genfocus
```
Install requirements:
```shell
pip install -r requirements.txt
```
You can download the pre-trained models using the following commands. Ensure you are in the Genfocus root directory.
```shell
# 1. Download the main models to the root directory
wget https://huggingface.co/nycu-cplab/Genfocus-Model/resolve/main/bokehNet.safetensors
wget https://huggingface.co/nycu-cplab/Genfocus-Model/resolve/main/deblurNet.safetensors

# 2. Set up the checkpoints directory and download the auxiliary model
mkdir -p checkpoints
cd checkpoints
wget https://huggingface.co/nycu-cplab/Genfocus-Model/resolve/main/checkpoints/depth_pro.pt
cd ..
```
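After the downloads finish, a quick sanity check can confirm everything landed where the demo expects it. This is a minimal sketch; the file list simply mirrors the download commands above.

```python
import os

def find_missing(expected, root="."):
    """Return the expected model files that are absent or empty under root."""
    return [
        p for p in expected
        if not os.path.isfile(os.path.join(root, p))
        or os.path.getsize(os.path.join(root, p)) == 0
    ]

# Paths produced by the download commands above
EXPECTED = [
    "bokehNet.safetensors",
    "deblurNet.safetensors",
    os.path.join("checkpoints", "depth_pro.pt"),
]

if __name__ == "__main__":
    missing = find_missing(EXPECTED)
    print("All model files present." if not missing else f"Missing or empty: {missing}")
```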
Launch the interactive web interface locally:
Note: The project builds on FLUX.1-dev, which is a gated model. You must request access on its Hugging Face model page and authenticate locally before running the demo.
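One common way to authenticate, assuming you use the official Hugging Face CLI (request access on the FLUX.1-dev model page first):

```shell
# Install the Hugging Face CLI and log in with a token that has FLUX.1-dev access
pip install -U "huggingface_hub[cli]"
huggingface-cli login
```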
```shell
python demo.py
```
The demo will be accessible at http://127.0.0.1:7860 in your browser.
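If the page does not load, a small check can tell you whether the server is actually listening. This is a sketch assuming the default host and port shown above:

```python
import socket

def demo_is_up(host="127.0.0.1", port=7860, timeout=1.0):
    """Return True if something is accepting connections on host:port."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    print("Demo reachable" if demo_is_up() else "Nothing listening on 127.0.0.1:7860")
```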
We are actively working on improving this project. Current progress:
- Upload Model Weights
- Release HF Demo & Gradio Code (with tiling tricks for high-res images)
- Release Benchmark data
- Release Inference Code (Support for adjustable parameters/settings)
- Release Training Code and Data
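The tiling trick mentioned above for high-resolution images can be sketched roughly as follows; this is a simplified illustration with made-up tile sizes, not the released Gradio code. The idea is to split the image into overlapping tiles, process each tile independently, and average the overlapping regions when stitching the result back together.

```python
import numpy as np

def tile_coords(length, tile, overlap):
    """Start offsets covering [0, length) with tiles of size `tile` overlapping by `overlap`."""
    step = tile - overlap
    starts = list(range(0, max(length - tile, 0) + 1, step))
    if starts[-1] + tile < length:  # ensure the last tile reaches the image edge
        starts.append(length - tile)
    return starts

def process_tiled(img, fn, tile=256, overlap=32):
    """Apply fn to overlapping tiles of img (H, W, C) and average the overlaps."""
    h, w = img.shape[:2]
    out = np.zeros_like(img, dtype=np.float64)
    weight = np.zeros((h, w, 1))
    for y in tile_coords(h, tile, overlap):
        for x in tile_coords(w, tile, overlap):
            out[y:y+tile, x:x+tile] += fn(img[y:y+tile, x:x+tile])
            weight[y:y+tile, x:x+tile] += 1
    return out / weight
```

With an identity `fn`, the stitched output reproduces the input exactly, which makes the blending easy to verify before plugging in a real model.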
If you find this project useful for your research, please consider citing:
```bibtex
@article{Genfocus2025,
  title   = {Generative Refocusing: Flexible Defocus Control from a Single Image},
  author  = {Tuan Mu, Chun-Wei and Huang, Jia-Bin and Liu, Yu-Lun},
  journal = {arXiv preprint arXiv:2512.16923},
  year    = {2025}
}
```
For any questions or suggestions, please open an issue or contact me at raytm9999.cs09@nycu.edu.tw.
Star ⭐ this repository if you like it!