I have a tab-delimited text file of letters, and a numpy array (`obj`) containing a few letters (a single row). The rows of the text file have different numbers of columns. Some rows may contain multiple copies of the same letter (I would like to consider only a single copy of a letter in each row). Also, each letter of `obj` is present in one or more rows of the text file.
Letters in the same row of the text file are assumed to be similar to each other. Imagine a similarity metric between two letters that can take the value 1 (related) or 0 (not related). Any pair of letters in the same row is assumed to have similarity metric value = 1. In the example below, the letters `j` and `n` are in the same row (the second row); hence `j` and `n` have similarity metric value = 1.
Here is an example of the text file:
```
b q a i m l r
j n o r o
e i k u i s
```
In the example, the letter `o` appears twice in the second row, and the letter `i` appears twice in the third row. I would like to keep only a single copy of each letter in every row of the text file.
This is an example of `obj`:

```python
obj = np.asarray(['a', 'e', 'i', 'o', 'u'])
```
I want to compare `obj` with the rows of the text file and form clusters from the elements of `obj`.
This is how I want to do it. Corresponding to each row of the text file, I want a list that denotes a cluster (in the example above we will have three clusters, since the text file has three rows). For every element of `obj`, I find the rows of the text file in which that element is present. Then I assign the index of that element of `obj` to the cluster corresponding to the row of maximum length (row lengths are computed after reducing each row to a single copy of each letter).
```python
import pandas as pd
import numpy as np

data = pd.read_csv('file.txt', sep=r'\t+', header=None, engine='python').values[:,:].astype('<U1000')
obj = np.asarray(['a', 'e', 'i', 'o', 'u'])

# data_rowNNN: indices of obj elements present in row NNN of the file.
# clustNNN: the cluster corresponding to row NNN.
for i in range(data.shape[0]):
    globals()['data_row' + str(i).zfill(3)] = []
    globals()['clust' + str(i).zfill(3)] = []
    for j in range(len(obj)):
        if obj[j] in set(data[i, :]): globals()['data_row' + str(i).zfill(3)] += [j]

# For each obj element, record the match count of every row containing it,
# then assign the element to the row (cluster) with the largest count.
for i in range(len(obj)):
    globals()['obj_lst' + str(i).zfill(3)] = [0]*data.shape[0]
    for j in range(data.shape[0]):
        if i in globals()['data_row' + str(j).zfill(3)]:
            globals()['obj_lst' + str(i).zfill(3)][j] = len(globals()['data_row' + str(j).zfill(3)])
    indx_max = globals()['obj_lst' + str(i).zfill(3)].index( max(globals()['obj_lst' + str(i).zfill(3)]) )
    globals()['clust' + str(indx_max).zfill(3)] += [i]

for i in range(data.shape[0]): print(globals()['clust' + str(i).zfill(3)])
```
Output:

```
[0]
[3]
[1, 2, 4]
```
The code gives me the right answer. However, in my actual work the text file has tens of thousands of rows and the numpy array has hundreds of thousands of elements, and the code above is not very fast. So, I would like to know whether there is a better (faster) way to implement this functionality in Python.
- What do you mean by this statement: "Letters in the same row of the text file are assumed to be similar to each other."? (l0b0, Jan 3, 2019 at 7:40)
- In general, try to explain what you're trying to do without reference to the actual variables in the code. (l0b0, Jan 3, 2019 at 8:12)
- @l0b0: By the mentioned statement, I meant that the letters in the same row are related to each other. Imagine a similarity metric (between two letters) which can take the values 1 (related) or 0 (not related). When any pair of letters are in the same row, they are assumed to have similarity metric value = 1. In the given example, 'j' and 'n' are in the same row, i.e. the second row. Hence 'j' and 'n' have similarity metric value = 1. (Siddharth Satpathy, Jan 4, 2019 at 17:37)
- You should update the question to include this extra information. (l0b0, Jan 4, 2019 at 20:09)
2 Answers
I can't understand your algorithm as written, but some very general advice applies:
- Use `format()` or template strings to format strings (see the sketch after this list).
- Rather than creating dynamic dictionary keys, I would create lists `data_row`, `clust` (but see the naming review below), etc. and assign to indexes in these lists. That way you get rid of the global variables (which are bad for reasons discussed at great length elsewhere), you won't need to format strings all over the place, and you won't need to do the `str()` conversions. You should also be able to get rid of the array initialization this way, something which is a code smell in garbage collected languages.
- Can there really be multiple tab characters between columns? That would be weird. If not, you might get less surprising results using a single tab as the column separator.
- Naming could use some work. For example:
  - In general, don't use abbreviations, especially not single-letter ones or ones which shorten by only one or two letters. For example, use `index` (or `[something]_index` if there are multiple indexes in the current context) rather than `indx`, `idx`, `i` or `j`.
  - `data` should be something like `character_table`.
  - I don't know what `obj` is, but `obj` gives me no information at all. Should it be `vowels`?
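For instance, the zero-padded names from the question could be built like this (a minimal illustration; the name and index are just the ones from the question, and the f-string variant assumes Python 3.6+):

```python
i = 1
name = 'data_row' + str(i).zfill(3)   # current style: 'data_row001'
name = 'data_row{:03d}'.format(i)     # str.format(): 'data_row001'
name = f'data_row{i:03d}'             # f-string:     'data_row001'
```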
Do one thing at a time
Don't put multiple statements on one line, i.e.

```python
if obj[j] in set(data[i, :]): globals()['data_row' + str(i).zfill(3)] += [j]
```
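The same statement, mechanically split onto two lines (nothing else changed):

```python
if obj[j] in set(data[i, :]):
    globals()['data_row' + str(i).zfill(3)] += [j]
```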
Global population?
You're doing a curious thing: you're populating the global namespace with variable names that have integer indices baked into them. Since I can't find a reason for this anywhere in your description (and even if you did have a reason, it probably wouldn't be a good one), really try to avoid doing this. In other words, rather than writing to `globals()['data_row001']`, just write to a list called `data_row` (and `obj_lst`, etc.). You can still print it in whatever format you want later.
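A minimal sketch of what that restructuring could look like, using the three-row example from the question (the names `data_row` and `clust` mirror the question's; the list-based layout is my suggestion, not the original code):

```python
import numpy as np

rows = [['b', 'q', 'a', 'i', 'm', 'l', 'r'],
        ['j', 'n', 'o', 'r', 'o'],
        ['e', 'i', 'k', 'u', 'i', 's']]
obj = np.asarray(['a', 'e', 'i', 'o', 'u'])

# One inner list per file row, indexed by row number -- no dynamically
# named globals. data_row[i] holds the obj indices found in row i.
data_row = [[j for j in range(len(obj)) if obj[j] in set(row)]
            for row in rows]
clust = [[] for _ in rows]

for j in range(len(obj)):
    # Assign obj[j] to whichever containing row matched the most obj letters.
    candidates = [i for i in range(len(rows)) if j in data_row[i]]
    best = max(candidates, key=lambda i: len(data_row[i]))
    clust[best].append(j)

print(clust)  # [[0], [3], [1, 2, 4]]
```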
Use fluent syntax
For long statements with several `.` calls, such as this:

```python
data = pd.read_csv('file.txt', sep=r'\t+', header=None, engine='python').values[:,:].astype('<U1000')
```

try rewriting it on multiple lines for legibility:

```python
data = (pd
    .read_csv('file.txt', sep=r'\t+', header=None, engine='python')
    .values[:,:]
    .astype('<U1000')
)
```
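(The enclosing parentheses allow the chain to continue across lines without backslash continuations, so each step of the pipeline gets its own line.)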