Just Relax It

Discrete Variables Relaxation

Compatible with PyTorch · Inspired by Pyro

"Just Relax It" is a cutting-edge Python library designed to streamline the optimization of discrete probability distributions in neural networks, offering a suite of advanced relaxation techniques compatible with PyTorch.

๐Ÿ“ฌ Assets

  1. Technical Meeting 1 - Presentation
  2. Technical Meeting 2 - Jupyter Notebook
  3. Technical Meeting 3 - Jupyter Notebook
  4. Blog Post 1
  5. Blog Post 2
  6. Documentation
  7. Tests
  8. Technical Report

๐Ÿ’ก Motivation

Many mathematical problems require the ability to sample discrete random variables. The problem is that, due to the continuous nature of deep learning optimization, truly discrete random variables cannot be used directly, so various relaxation methods are applied instead. One of them, the Concrete distribution, also known as Gumbel-Softmax (the same distribution was proposed in parallel by two research groups), is implemented in several deep learning packages. In this project we implement alternative relaxations.
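
As a concrete illustration of what "relaxation" means here, below is a minimal Gumbel-Softmax sketch in plain PyTorch (using `torch.nn.functional.gumbel_softmax`, not relaxit itself): the temperature `tau` controls how close samples are to one-hot, and gradients flow from the soft samples back to the logits.

```python
import torch
import torch.nn.functional as F

# Unnormalized log-probabilities of four independent 3-way categorical variables.
logits = torch.randn(4, 3, requires_grad=True)

# Soft relaxed samples: points on the probability simplex,
# differentiable with respect to the logits.
soft = F.gumbel_softmax(logits, tau=0.5, hard=False)

# Straight-through samples: one-hot in the forward pass,
# soft gradients in the backward pass.
hard = F.gumbel_softmax(logits, tau=0.5, hard=True)

# Gradients propagate through the relaxed samples.
soft.sum().backward()
```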

๐Ÿ—ƒ Algorithms

๐Ÿ› ๏ธ Install Using uv (Recommended)

For Production

uv pip install relaxit

For Development

git clone https://github.com/intsystems/relaxit
cd relaxit
uv venv # create venv
source .venv/bin/activate # activate venv
uv sync # install all the dependencies
uv pip install -e . # make the relaxit package editable

To run tests:

uv run pytest tests/

To run Python scripts:

uv run python demo/vae_hard_concrete.py

To run notebooks:

uv run jupyter lab

โš’๏ธ Install Using pip

For Production

pip install -r requirements.txt

For Development

pip install -r requirements-dev.txt

๐Ÿš€ Quickstart

import torch
from relaxit.distributions import InvertibleGaussian
# initialize distribution parameters
loc = torch.zeros(3, 4, 5, requires_grad=True)
scale = torch.ones(3, 4, 5, requires_grad=True)
temperature = torch.tensor([1e-0])
# initialize distribution
distribution = InvertibleGaussian(loc, scale, temperature)
# sample with reparameterization
sample = distribution.rsample()
print('sample.shape:', sample.shape)
print('sample.requires_grad:', sample.requires_grad)
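
The key property of `rsample` is that gradients flow from the sample back to the distribution parameters. Below is a minimal sketch of the same pathwise (reparameterization) mechanism using `torch.distributions.Normal`, so it runs without relaxit installed:

```python
import torch

loc = torch.zeros(3, requires_grad=True)
scale = torch.ones(3, requires_grad=True)
dist = torch.distributions.Normal(loc, scale)

# rsample() draws loc + scale * eps with eps ~ N(0, 1), so the sample
# is a differentiable function of the parameters.
sample = dist.rsample()
loss = sample.pow(2).sum()
loss.backward()

# Both parameters received gradients through the sample.
```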

๐ŸŽฎ Demo

Three demo experiments are available, each with an accompanying Colab notebook:

  1. Laplace Bridge
  2. REINFORCE in the Acrobot environment
  3. VAE with discrete latents

For demonstration purposes, we divide our algorithms into three[^1] different groups. Each group relates to a particular demo code:

We describe our demo experiments here.
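
For contrast with the relaxation methods, the REINFORCE (score-function) estimator used in the Acrobot demo needs no relaxation at all: it differentiates the `log_prob` of a truly discrete sample and weights it by the reward. A toy sketch in plain PyTorch (not the demo code itself):

```python
import torch

logits = torch.zeros(3, requires_grad=True)
dist = torch.distributions.Categorical(logits=logits)

action = dist.sample()          # truly discrete; no gradient path through it
reward = (action == 2).float()  # toy reward signal
# Score-function estimator: grad E[r] = E[r * grad log p(action)]
loss = -(reward * dist.log_prob(action))
loss.backward()
```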

๐Ÿ“š Stack

Some of the alternatives to Gumbel-Softmax were already implemented in Pyro, so we base our library on its codebase.

๐Ÿงฉ Some details

To make the library consistent, we integrate imports of distributions from Pyro and PyTorch into the library, so that all the categorical distributions can be imported from a single entrypoint.

๐Ÿ‘ฅ Contributors

๐Ÿ”— Useful links

Footnotes

[^1]: We also implement the REINFORCE algorithm as a score-function estimator alternative to our relaxation methods, which are inherently pathwise derivative estimators. It is implemented only for the demo experiments and is not included in the package source code.
