Is Wasserstein Discriminant Analysis usable for non-toy datasets? #725

Unanswered
dherrera1911 asked this question in Q&A

I am trying to use the Wasserstein Discriminant Analysis (WDA) implementation in POT, shown here: https://pythonot.github.io/auto_examples/others/plot_WDA.html

I can reproduce the example in the link above with no problem. However, when I try to apply the WDA implementation to MNIST, it does not complete any iterations and the process is eventually killed. In the original paper the authors apply the method to MNIST and report a low training time. So I was wondering whether this implementation is known not to scale to larger datasets, or if I am missing something.

Code to reproduce below:

import torch
import torchvision
from sklearn.decomposition import PCA
from ot.dr import wda

# Download and load the MNIST training set
trainset = torchvision.datasets.MNIST(root='./data', train=True, download=True)
n_samples, n_row, n_col = trainset.data.shape
n_dim = n_row * n_col

# Scale the data and subtract the global mean
def scale_and_center(x):
    std = x.std()
    x = x / (std * n_row)
    global_mean = x.mean(axis=0, keepdims=True)
    return x - global_mean

x_train = scale_and_center(trainset.data.reshape(-1, n_dim).float())
y_train = trainset.targets

# Initialize the projection with the leading PCA filters
pca = PCA(n_components=6)
pca.fit(x_train.numpy())
pca_filters = pca.components_

Pwda, projwda = wda(x_train.numpy(), y_train.numpy(), p=6, reg=0.01,
                    P0=pca_filters.T)

Replies: 1 comment


Hello, this is a good question.

The WDA paper was originally implemented in Matlab, with all gradient computations done by hand, and that implementation was quite optimized. The implementation in POT is a rewrite using autograd, and it was not tested on the experiments from the paper. It does not scale that well to large datasets (maybe we should implement an SGD on the Stiefel manifold...), but in any case I suggest that you pass the parameter sinkhorn_method="sinkhorn_log", which is much more robust and should not have numerical problems (plain sinkhorn might not converge, but sinkhorn_log should work).
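
A minimal sketch of that suggestion, reusing x_train, y_train, n_samples, and pca_filters from the snippet in the question. The subsampling to 5000 points is an assumption added here to keep the problem size manageable; it is not part of the reply:

import torch
from ot.dr import wda

# Illustrative only: the same call as in the question, switched to the
# log-domain Sinkhorn solver suggested above.
# The subsampling below is an added assumption, not from the reply.
n_sub = 5000
idx = torch.randperm(n_samples)[:n_sub]
Pwda, projwda = wda(x_train[idx].numpy(), y_train[idx].numpy(),
                    p=6, reg=0.01, P0=pca_filters.T,
                    sinkhorn_method="sinkhorn_log")

Note that P0=pca_filters.T here reuses the PCA fit on the full training set; refitting PCA on the subsample would also work.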


This discussion was converted from issue #717 on March 20, 2025 07:50.
