MAT

The official implementation of the Molecule Attention Transformer (paper available on arXiv).

[Figure: MAT architecture]

Code

  • EXAMPLE.ipynb: a Jupyter notebook with an example of loading pretrained weights into MAT,
  • transformer.py: the MAT class implementation (see the sketch below),
  • utils.py: utility functions.
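For orientation, here is a minimal sketch of constructing the model from transformer.py. It assumes the file exposes a make_model-style builder, following The Annotated Transformer convention this implementation is based on; the constructor name, hyperparameter names, and values are assumptions, so see EXAMPLE.ipynb for the canonical usage:

```python
# A minimal sketch, not the canonical API: see EXAMPLE.ipynb for real usage.
# Assumes transformer.py exposes a make_model-style builder (an assumption,
# following The Annotated Transformer convention this code is based on).
from transformer import make_model

# Hyperparameter names and values below are assumptions for illustration.
model = make_model(
    d_atom=28,     # size of the input atom feature vector (assumed)
    N=8,           # number of encoder layers (assumed)
    d_model=1024,  # hidden dimension (assumed)
    h=16,          # number of attention heads (assumed)
)
model.eval()
```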

More functionality will be available soon!

Pretrained weights

Pretrained weights are available here.
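Loading the checkpoint is a standard PyTorch state-dict load. A minimal sketch, assuming the downloaded file is a plain state dict (the filename, hyperparameter values, and strict=False handling are all assumptions; EXAMPLE.ipynb shows the exact steps):

```python
import torch
from transformer import make_model  # assumed builder, as in the sketch above

# Hyperparameter values are assumptions for illustration; match them to the
# configuration the checkpoint was trained with (see EXAMPLE.ipynb).
model = make_model(d_atom=28, N=8, d_model=1024, h=16)

# Hypothetical filename: use the actual file downloaded from the link above.
state_dict = torch.load('pretrained_weights.pt', map_location='cpu')

# strict=False in case the public checkpoint omits task-specific heads
# (an assumption; check missing/unexpected keys in practice).
model.load_state_dict(state_dict, strict=False)
model.eval()
```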

Results

In this section we present the average rank across the 7 datasets from our benchmark; the metric is illustrated with a toy example after the list.

  • Results for a hyperparameter search budget of 500 combinations.

  • Results for a hyperparameter search budget of 150 combinations.

  • Results for the pretrained model.
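"Average rank" means each method is ranked per dataset (1 = best) and the ranks are averaged over the 7 datasets. A toy illustration of the metric (the method names and ranks below are made up, not the paper's numbers):

```python
# Toy illustration of the average-rank metric; numbers are made up.
ranks = {
    'MAT':        [1, 2, 1, 3, 1, 2, 1],  # per-dataset rank over 7 datasets
    'baseline_a': [2, 1, 3, 1, 2, 1, 2],
    'baseline_b': [3, 3, 2, 2, 3, 3, 3],
}
# Print methods from best (lowest) to worst average rank.
for method, r in sorted(ranks.items(), key=lambda kv: sum(kv[1]) / len(kv[1])):
    print(f'{method}: average rank = {sum(r) / len(r):.2f}')
```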

Requirements

  • PyTorch 1.4

Acknowledgments

The Transformer implementation is inspired by The Annotated Transformer.
