Ensembles of Approximators #532

Draft · han-ol wants to merge 22 commits into dev from ensembles

Conversation

han-ol (Collaborator) commented Jul 4, 2025 (edited)

This draft-PR is the result of discussions with @elseml and @stefanradev93.

The goal is fast and convenient support for approximator ensembles; the first steps toward this are already in place.

  1. We envision ApproximatorEnsemble as the abstraction at the heart of future workflows using ensembles (see the sketch after this list).

    • Fundamentally, it is a wrapper around a dictionary of arbitrary Approximator objects.
    • It overrides the central methods compute_metrics, build, and sample, and passes inputs on to the respective ensemble members' methods.
  2. Since ensembles should capture the sensitivity with respect to all sources of randomness in approximators, which includes not just initialization but also the random order of training batches, we need slightly modified datasets.

    • For now, only OfflineEnsembleDataset is implemented. It ensures that training batches have an additional dimension on the second axis, containing multiple independent random slices of the available offline samples.
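
To make point 1 concrete, here is a minimal sketch of the fan-out idea, assuming the method names from the description above; the internals are illustrative, not the actual implementation in this branch:

```python
class ApproximatorEnsemble:
    """Sketch: wraps a dict of approximators and fans calls out to each member."""

    def __init__(self, approximators: dict):
        self.approximators = approximators

    def build(self, data_shapes: dict) -> None:
        for approximator in self.approximators.values():
            approximator.build(data_shapes)

    def compute_metrics(self, data: dict, stage: str = "training") -> dict:
        # One metrics dict per member, keyed by member name. In the actual PR,
        # each member would additionally receive its own slice of the
        # per-member data axis described in point 2.
        return {
            name: approximator.compute_metrics(data, stage=stage)
            for name, approximator in self.approximators.items()
        }

    def sample(self, *, num_samples: int, conditions: dict) -> dict:
        # One set of posterior draws per member, keyed by member name.
        return {
            name: approximator.sample(num_samples=num_samples, conditions=conditions)
            for name, approximator in self.approximators.items()
        }
```

And a small, self-contained illustration of the batch layout from point 2, assuming the member axis is the second one (all shapes here are hypothetical):

```python
import numpy as np

batch_size, num_ensemble, num_params = 64, 4, 5
offline_samples = np.random.rand(10_000, num_params)  # available offline samples

# One independent random slice of the offline samples per ensemble member,
# stacked along the second axis: (batch_size, num_ensemble, num_params).
idx = np.random.randint(0, len(offline_samples), size=(batch_size, num_ensemble))
batch = offline_samples[idx]
assert batch.shape == (batch_size, num_ensemble, num_params)
```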

A few things are still missing, among them:

  • predict/estimate methods for ApproximatorEnsemble (currently only sample exists)
  • tests for ApproximatorEnsemble
  • doc strings for ApproximatorEnsemble
  • OnlineEnsembleDataset
  • DiskEnsembleDataset
  • tests for ensemble datasets
  • a Workflow for ensembles


codecov bot commented Jul 4, 2025 (edited)

Codecov Report

❌ Patch coverage is 98.90110% with 1 line in your changes missing coverage. Please review.

Files with missing lines                                Patch %   Lines
bayesflow/approximators/approximator_ensemble.py         98.61%   1 Missing ⚠️

Files with missing lines                                Coverage Δ
bayesflow/__init__.py                                    49.05% <100.00%> (ø)
bayesflow/approximators/__init__.py                     100.00% <100.00%> (ø)
bayesflow/approximators/approximator.py                  79.54% <100.00%> (-0.46%) ⬇️
...low/approximators/model_comparison_approximator.py    85.79% <100.00%> (+0.59%) ⬆️
bayesflow/datasets/__init__.py                          100.00% <100.00%> (ø)
bayesflow/datasets/offline_ensemble_dataset.py          100.00% <100.00%> (ø)
bayesflow/approximators/approximator_ensemble.py         98.61% <98.61%> (ø)

elseml (Member) commented Jul 9, 2025

Good job Hans! FYI, commit 955ac79 uncovered a bug in ModelComparisonApproximator's build_from_data method, which d8d84c8 addresses.

Kucharssim and han-ol reacted with thumbs up emoji

han-ol and others added 11 commits July 17, 2025 17:31

Known problem: when the approximators share networks/weights, deserialization will fail. I'm not sure yet; maybe we can fix this by looking at the way the weights are stored during saving.
vpratz (Collaborator) commented Aug 5, 2025

Nice work :) While trying it out a bit, the additional dimension for each ensemble member in the data caught me by surprise, and it took me a while to figure out why a dimension was missing from my data during training.
I'm not sure what the best interface would be here, but I think it would be good to think about the design for a bit. Maybe the following questions will get us closer to what we want to have:

  1. Should the ordinary dataset classes be supported? If yes, which mode should they operate in (and does it need a warning)? If no, we might want to override fit to check for them and raise an error if they are passed (see the sketch after this list).
  2. Should there be multiple "modes" for the ...EnsembleDatasets, i.e., identical data vs. different data? This might be especially relevant for online training, as different data increases the required compute.
  3. How do we handle custom dataset classes, and what shapes do we expect from them?

As I did not take part in the discussions, maybe you have already talked this through. In any case, I would be happy to hear your thoughts on this.
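
For question 1, a minimal sketch of the "raise an error" option, assuming a hypothetical fit override on ApproximatorEnsemble; OfflineEnsembleDataset is the class added in this PR, everything else here is illustrative:

```python
from bayesflow.datasets import OfflineEnsembleDataset  # added in this branch


class ApproximatorEnsemble:
    def fit(self, dataset=None, **kwargs):
        # Hypothetical guard: ensemble training needs the per-member batch
        # axis that only the ensemble dataset classes provide, so reject
        # plain datasets early with a clear error.
        if dataset is not None and not isinstance(dataset, OfflineEnsembleDataset):
            raise TypeError(
                f"{type(dataset).__name__} lacks the per-member batch axis; "
                "pass an ensemble dataset such as OfflineEnsembleDataset."
            )
        ...  # hand off to the usual training loop
```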

vpratz (Collaborator) commented Aug 5, 2025

I have added serialization support, but deserialization fails when multiple approximators use the same weights, e.g. when they share a summary network. I'm not sure yet how this can be resolved, or whether we want to enable serialization if we cannot resolve it.

Contributor commented

@han-ol could you perhaps provide a minimal code example of how the ensembles should work? On that basis, it might also be easier to discuss interface questions, including those of @vpratz.


vpratz (Collaborator) commented Aug 15, 2025

@han-ol and I had a discussion on the dataset question. One approach would be a general EnsembleDatasetWrapper with the following properties:

  • it takes in an arbitrary dataset, e.g. an instance of OnlineDataset or OfflineDataset
  • it determines the batch size either by reading dataset.batch_size or by sampling a batch from the dataset
  • it has a parameter like unique_data_fraction to control whether all ensemble members get the same data (0.0) or every member gets different data (1.0); for values in between, a bootstrap procedure can be used.
  • the required number of simulations can be obtained by repeatedly sampling batches from dataset, or, for our own simulators, by changing the dataset.batch_size parameter. The latter would be a bit hacky; we would have to see whether we want this.

In addition, in the approximator ensemble class, we can determine from the shape of inference_variables whether a standard dataset (like OnlineDataset) was used. If so, we default to showing the same data to all ensemble members.

This lets us keep using our existing datasets, and it requires only this one additional class to add the capability of passing different data to different approximators; a sketch follows below.
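
A minimal sketch of how such a wrapper could look, assuming NumPy-dict batches; the name EnsembleDatasetWrapper and the unique_data_fraction parameter come from the discussion above, while the within-batch bootstrap mechanics are only one possible realization:

```python
import numpy as np


class EnsembleDatasetWrapper:
    """Wraps any dataset and adds a per-member axis to each batch."""

    def __init__(self, dataset, num_ensemble, unique_data_fraction=1.0, seed=None):
        self.dataset = dataset
        self.num_ensemble = num_ensemble
        self.unique_data_fraction = unique_data_fraction
        self.rng = np.random.default_rng(seed)
        # Determine the batch size by reading the attribute if present,
        # otherwise by sampling one batch from the wrapped dataset.
        self.batch_size = getattr(dataset, "batch_size", None)
        if self.batch_size is None:
            probe = dataset[0]
            self.batch_size = len(next(iter(probe.values())))

    def __len__(self):
        return len(self.dataset)

    def __getitem__(self, idx):
        batch = self.dataset[idx]
        out = {}
        for key, value in batch.items():
            n = len(value)
            members = []
            for _ in range(self.num_ensemble):
                # Bootstrap-resample a fraction of the batch positions so
                # members see partially (or fully) different data.
                indices = np.arange(n)
                num_resampled = round(self.unique_data_fraction * n)
                if num_resampled > 0:
                    where = self.rng.choice(n, size=num_resampled, replace=False)
                    indices[where] = self.rng.integers(0, n, size=num_resampled)
                members.append(value[indices])
            # New second axis: (batch_size, num_ensemble, ...).
            out[key] = np.stack(members, axis=1)
        return out
```

At unique_data_fraction=0.0 the member slices are identical; at 1.0 every position is bootstrap-resampled, so members see (almost surely) different data.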

Contributor commented

I am strongly in favor of this idea. We also discussed in the past that this is a more elegant and catch-all solution.

vpratz reacted with thumbs up emoji
