gsarti / README.md

Portfolio · Hugging Face Hub · Twitter · LinkedIn · Google Scholar

I am a postdoc at the BauLab at Northeastern University and a member of the NSF National Deep Inference Fabric (NDIF) team, working on open-source interfaces for interpretability research. Previously, I was a PhD student in the GroNLP lab at the University of Groningen and part of the Dutch InDeep consortium, where I wrote a thesis on actionable interpretability for machine translation. Before that, I was an applied scientist intern at AWS AI Labs NYC, a research scientist at Aindo, and a founding member of the AI Student Society in Trieste.

My research aims to bridge the gap between advances in interpretability research on large language models (LLMs) and their downstream applications for improving the transparency and trustworthiness of such models. I am also very passionate about open-source collaboration :octocat:, and I believe that good tools play a fundamental role in scientific discovery. For this reason, I participate in the development of NDIF's nnsight interpretability toolkit and lead the development of inseq for attributional analyses of generative language models.

Pinned repositories

  1. inseq-team/inseq — Interpretability for sequence generation models 🐛 🔍 (Python, 454 stars, 39 forks)
  2. pecore — Materials for "Quantifying the Plausibility of Context Reliance in Neural Machine Translation" at ICLR'24 🐑 🐑 (Jupyter Notebook, 15 stars, 1 fork)
  3. it5 — Materials for "IT5: Large-scale Text-to-text Pretraining for Italian Language Understanding and Generation" 🇮🇹 (Jupyter Notebook, 30 stars, 4 forks)
  4. verbalized-rebus — Materials for "Non Verbis, Sed Rebus: Large Language Models are Weak Solvers of Italian Rebuses" at CLiC-it'24 🧩 (Jupyter Notebook, 3 stars, 1 fork)
  5. covid-papers-browser — Browse Covid-19 & SARS-CoV-2 Scientific Papers with Transformers 🦠 📖 (CSS, 184 stars, 27 forks)
  6. qe4pe — Code for "QE4PE: Word-level Quality Estimation for Human Post-Editing" ✍️ (Jupyter Notebook, 5 stars, 1 fork)
