(Bug) Update sentence_encoder.py: clamping cos_sim between -1 and 1 to avoid floating point precision errors in torch.acos(cos_sim) #804

Open
Aniloid2 wants to merge 1 commit into QData:master from Aniloid2:patch-1

Conversation

Aniloid2 commented Sep 19, 2024

What does this PR do?

Summary

If we compare two equal embeddings, emb1 == emb2, the cosine similarity should be 1. However, due to floating-point precision, we might end up with a value slightly greater than 1, such as 1.00004. torch.acos(cos_sim) is undefined outside [-1, 1] and returns NaN, so get_angular_sim returns NaN instead of 1. Applying cos_sim = torch.clamp(cos_sim, -1.0, 1.0) keeps cos_sim within the valid range expected by torch.acos(cos_sim).
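
Here is a minimal sketch of the change, assuming get_angular_sim in sentence_encoder.py follows the usual pattern of computing 1 - acos(cos_sim) / pi from a torch.nn.CosineSimilarity output (the exact surrounding code may differ):

```python
import math

import torch


def get_angular_sim(emb1, emb2):
    """Angular similarity between two batches of embeddings.

    Sketch only: clamping keeps values that drift slightly outside
    [-1, 1] (e.g. 1.00004) from making torch.acos return NaN.
    """
    cos_sim = torch.nn.CosineSimilarity(dim=1)(emb1, emb2)
    cos_sim = torch.clamp(cos_sim, -1.0, 1.0)  # proposed fix
    return 1 - (torch.acos(cos_sim) / math.pi)
```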

I am using TextAttack to perform attacks on LLMs. For testing, I mostly run custom attacks that produce different embeddings, emb1 and emb2. Occasionally an attack does not change any words, yet, due to the internal randomness of LLMs during the attack search, a second inference step results in a misclassification. Because the two samples are identical, the USE metric evaluation should yield a cosine similarity of exactly 1; instead I was getting NaN values, and I traced the issue to floating-point precision.
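
To illustrate the failure mode, here is a small standalone reproduction (not code from the repository; 1.00004 is just an example of the drift described above):

```python
import torch

# Cosine similarity of two identical embeddings that has drifted above 1.
cos_sim = torch.tensor(1.00004)

print(torch.acos(cos_sim))                          # tensor(nan)
print(torch.acos(torch.clamp(cos_sim, -1.0, 1.0)))  # tensor(0.)
```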

Additions

  • Added a torch.clamp to avoid floating point precision errors

Changes

  • No changes

Deletions

  • No deletions made

Checklist

  • [x] The title of your pull request should be a summary of its contribution.
  • [x] Please write a detailed description of what parts have been newly added and what parts have been modified. Please also explain why certain changes were made.
  • [x] If your pull request addresses an issue, please mention the issue number in the pull request description to make sure they are linked (and people consulting the issue know you are working on it).
  • [x] To indicate a work in progress please mark it as a draft on GitHub.
  • [x] Make sure existing tests pass.
  • [x] Add relevant tests. No quality testing = no merge.
  • [x] All public methods must have informative docstrings that work nicely with sphinx. For new modules/files, please add/modify the appropriate .rst file in TextAttack/docs/apidoc.
