I need a function that checks how different two strings are. I chose the Levenshtein distance as a quick approach, and implemented this function:
from difflib import ndiff

def calculate_levenshtein_distance(str_1, str_2):
    """
    The Levenshtein distance is a string metric for measuring the difference between two sequences.
    It is calculated as the minimum number of single-character edits necessary to transform one string into another.
    """
    distance = 0
    buffer_removed = buffer_added = 0
    for x in ndiff(str_1, str_2):
        code = x[0]
        # Code ? is ignored as it does not translate to any modification
        if code == ' ':
            distance += max(buffer_removed, buffer_added)
            buffer_removed = buffer_added = 0
        elif code == '-':
            buffer_removed += 1
        elif code == '+':
            buffer_added += 1
    distance += max(buffer_removed, buffer_added)
    return distance
Then calling it as:
similarity = 1 - calculate_levenshtein_distance(str_1, str_2) / max(len(str_1), len(str_2))
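For illustration, a quick sanity check with example strings of my own (the classic "kitten"/"sitting" pair, which needs 3 edits):

# Hypothetical test values: "kitten" -> "sitting" needs 3 edits
str_1, str_2 = "kitten", "sitting"
distance = calculate_levenshtein_distance(str_1, str_2)
print(distance)                                    # 3
print(1 - distance / max(len(str_1), len(str_2)))  # ~0.571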
How sloppy/prone to errors is this code? How can it be improved?
3 Answers
There is a module available for exactly that calculation, python-Levenshtein. You can install it with pip install python-Levenshtein. It is implemented in C, so it is probably faster than anything you can come up with yourself.
from Levenshtein import distance as levenshtein_distance
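A minimal usage sketch, assuming the package is installed (the example strings are my own):

from Levenshtein import distance as levenshtein_distance

# "kitten" -> "sitting" is the classic example: 3 single-character edits
print(levenshtein_distance("kitten", "sitting"))  # 3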
According to the docstring conventions (PEP 257), your docstring should look like this, i.e. with the indentation aligned to the """ and the line length curtailed to 80 characters:
def calculate_levenshtein_distance(str_1, str_2):
    """
    The Levenshtein distance is a string metric for measuring the difference
    between two sequences.
    It is calculated as the minimum number of single-character edits necessary
    to transform one string into another.
    """
    ...
- lucasgcb (Apr 8, 2019): Just to note, the module is licensed under GPL 2.0, so watch out if you're using it for work.
- Sergiy Kolodyazhnyy (Apr 9, 2019): Just to point out a small nitpick to other people who may stumble upon this answer, as per the help center: "Every answer must make at least one insightful observation about the code in the question. Answers that merely provide an alternate solution with no explanation or justification do not constitute valid Code Review answers and may be deleted." While this answer does provide an alternative, existing-module suggestion, it also goes into some suggestions about improving code quality. So it's an example of a decent answer.
- Kyra_W (Apr 9, 2019): Thanks! I did not know of this module. Will check it out.
- Graipher (Apr 9, 2019): @SergiyKolodyazhnyy While I (obviously) agree, and that is one of the reasons I added that part, I would actually argue that "It is implemented in C, so is probably faster than anything you can come up with yourself" would get around the "no explanation or justification" clause.
- maxbachmann (Feb 17, 2021): As a faster alternative to python-Levenshtein that is more openly licensed, github.com/maxbachmann/RapidFuzz (rapidfuzz.string_metric.levenshtein) can be used.
The code itself is rather clear. There are some smaller changes I would make:
tuple unpacking
You can use tuple unpacking to do:
for code, *_ in ndiff(str_1, str_2):
instead of:
for x in ndiff(str_1, str_2):
    code = x[0]
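This works because each ndiff entry is a short string such as '- k'; star-unpacking splits it into its characters, as this small illustration shows:

# An ndiff entry is a code character, a space, then the character itself
code, *rest = "- k"
print(code)  # '-'
print(rest)  # [' ', 'k']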
dict results:
Instead of separate counters for the additions and removals, I would keep them in one dict: counter = {"+": 0, "-": 0}
from difflib import ndiff

def levenshtein_distance(str1, str2):
    counter = {"+": 0, "-": 0}
    distance = 0
    for edit_code, *_ in ndiff(str1, str2):
        if edit_code == " ":
            distance += max(counter.values())
            counter = {"+": 0, "-": 0}
        else:
            # ndiff's '?' hint lines do not occur for single-character
            # sequences, so only '+' and '-' reach this branch
            counter[edit_code] += 1
    distance += max(counter.values())
    return distance
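Called the same way as before (the test values are my own):

print(levenshtein_distance("kitten", "sitting"))  # 3 edits, as expected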
generators
A smaller, less useful variation is to let this method be a generator and use the builtin sum to do the summation. This saves one variable inside the function:
from difflib import ndiff

def levenshtein_distance_gen(str1, str2):
    counter = {"+": 0, "-": 0}
    for edit_code, *_ in ndiff(str1, str2):
        if edit_code == " ":
            yield max(counter.values())
            counter = {"+": 0, "-": 0}
        else:
            counter[edit_code] += 1
    yield max(counter.values())
sum(levenshtein_distance_gen(str1, str2))
timings
The differences in timings between the original and both these variations are minimal, and within the variation of the results. This is rather logical, since for simple strings (aaabbbc and abcabcabc) 90% of the time is spent in ndiff.
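For reference, a sketch of how such a timing comparison might be run; the repeat count and setup here are my own choices, not the original benchmark:

import timeit

setup = "from __main__ import calculate_levenshtein_distance, levenshtein_distance"
for name in ("calculate_levenshtein_distance", "levenshtein_distance"):
    elapsed = timeit.timeit(f"{name}('aaabbbc', 'abcabcabc')",
                            setup=setup, number=10_000)
    print(name, elapsed)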
- Kyra_W (Apr 9, 2019): Awesome suggestions. I had not even considered the generator approach, but it looks very nice. Thanks.
I used the dict code from https://codereview.stackexchange.com/a/217074, but if you don't want to implement the method yourself, I recommend the edit distance method of nltk (https://tedboy.github.io/nlps/generated/generated/nltk.edit_distance.html) over python-Levenshtein (https://pypi.org/project/python-Levenshtein/) because of the licensing.
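A minimal sketch of the nltk call, assuming nltk is installed:

from nltk import edit_distance

# nltk's edit_distance is a pure-Python dynamic-programming implementation
print(edit_distance("kitten", "sitting"))  # 3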
- Toby Speight (Jul 8, 2022): This doesn't seem to review the code - perhaps it should be a comment instead?
- The Levenshtein distance is symmetric, but difflib.ndiff is not. Hence, the "distance" score is different if you swap the arguments.
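A quick way to check this for yourself (the test strings are hypothetical):

# The true Levenshtein distance is symmetric; the ndiff-based score
# may differ between the two argument orders.
a, b = "some string", "another string"
print(calculate_levenshtein_distance(a, b))
print(calculate_levenshtein_distance(b, a))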